WO2019033957A1 - Interaction position determining method and system, storage medium, and intelligent terminal - Google Patents

Interaction position determining method and system, storage medium, and intelligent terminal

Info

Publication number
WO2019033957A1
WO2019033957A1 PCT/CN2018/099219 CN2018099219W WO2019033957A1 WO 2019033957 A1 WO2019033957 A1 WO 2019033957A1 CN 2018099219 W CN2018099219 W CN 2018099219W WO 2019033957 A1 WO2019033957 A1 WO 2019033957A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
target
gesture
image
interaction
Prior art date
Application number
PCT/CN2018/099219
Other languages
English (en)
French (fr)
Inventor
刘国华
Original Assignee
深圳市国华识别科技开发有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市国华识别科技开发有限公司 filed Critical 深圳市国华识别科技开发有限公司
Priority to JP2020508377A priority Critical patent/JP2020530631A/ja
Publication of WO2019033957A1 publication Critical patent/WO2019033957A1/zh
Priority to US16/791,737 priority patent/US11163426B2/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42204User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482Interaction with lists of selectable items, e.g. menus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304Detection arrangements using opto-electronic means

Definitions

  • the target position of the interactive control response area is determined according to the gesture image and the gesture image position, thereby improving the convenience of operation.
  • An interactive location determining method includes:
  • the interaction control response region is activated according to the gesture image
  • the step of activating the interactive control response area according to the gesture image comprises:
  • the interaction location determining method further includes:
  • the step of determining a target display location according to a location of the face image in the smart terminal display screen includes:
  • the target display position is determined according to a preset rule according to current position information of each face image.
  • An interactive location determining system comprising:
  • a gesture recognition module configured to identify a gesture image in the target area, and obtain current location information corresponding to the gesture image in the display screen of the smart terminal;
  • An activation module configured to activate an interaction control response area according to the gesture image when detecting that the gesture corresponding to the gesture image is a preset activation gesture
  • a target location determining module configured to determine a target location of the interactive control response region according to the current location information.
  • the activation module is further configured to acquire current state information corresponding to the smart terminal, determine corresponding function information according to the current state information, and generate corresponding target interaction control response information according to the function information; And activating the target interaction control response region corresponding to the target interaction control response information according to the gesture image.
  • a control module configured to detect a gesture operation that is applied to the target location, trigger a corresponding interaction control instruction according to the gesture operation, and control the smart terminal to perform a corresponding operation according to the interaction control instruction.
  • the interactive location determining system further includes:
  • a face recognition module configured to identify a face image in the target area, and obtain first position information of the face image in the display screen of the smart terminal;
  • a processing module configured to acquire a face image size corresponding to the face image, and determine a distance between the current user and the smart terminal display screen according to the face image size
  • a display size determining module configured to acquire a preset distance range corresponding to the distance, and determine a target display size of the interaction information according to the preset distance range;
  • a display module configured to determine a target display location of the interaction information according to the first location information, and display the interaction information in the target display location according to the target display size.
  • the face recognition module is further configured to acquire the number of face images recognized in the target area
  • a face location information acquiring module configured to acquire current location information corresponding to each face image when the number of the face images is one or more;
  • the target display position determining module is configured to determine the target display position according to a preset rule according to current position information of each face image.
  • a computer readable storage medium having stored thereon a computer program, the computer program being executed by a processor, causing the processor to perform the steps of: recognizing a gesture image within a target area, and acquiring The current position information corresponding to the gesture image in the display screen of the smart terminal; when it is detected that the gesture corresponding to the gesture image is a preset activation gesture, the interaction control response region is activated according to the gesture image; according to the current location information Determining a target location of the interactive control response area.
  • An intelligent terminal comprising one or more processors, a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors.
  • the program is configured to perform the following steps: identifying a gesture image in the target area, and acquiring current location information corresponding to the gesture image in the smart terminal display screen; when detecting that the gesture corresponding to the gesture image is a preset activation gesture, activating the interactive control response region according to the gesture image; and determining the target location of the interactive control response region according to the current location information.
  • the interactive location determining method, the system, the storage medium, and the smart terminal activate the interactive control response region by recognizing the gesture image in the target region, and determine the target location corresponding to the interactive control response region according to the current location information of the gesture image.
  • the target position corresponding to the interactive control response area is determined according to the current position of the recognized gesture image in the display screen of the smart terminal. As the current position of the gesture image changes, the target position of the interactive control response area changes correspondingly, which is convenient for the user to operate the interactive control response area and improves the convenience of operation.
  • FIG. 1 is a flow chart of a method for determining an interaction position in an embodiment
  • FIG. 2 is a flow chart of a method for determining an interaction position in an embodiment
  • FIG. 3 is a flow chart of a method for determining an interaction position in a specific embodiment
  • FIG. 4A is a schematic diagram of an intelligent terminal interaction control interface in an embodiment
  • FIG. 5 is a structural block diagram of an interaction location determining system in an embodiment
  • FIG. 6 is a structural block diagram of an interactive location determining system in an embodiment
  • FIG. 7 is a structural block diagram of an interaction position determining system in another embodiment
  • FIG. 8 is a diagram showing the internal structure of an intelligent terminal in an embodiment.
  • an interactive location determining method including the following:
  • Step S110 identifying a gesture image in the target area, and acquiring current position information corresponding to the gesture image in the display screen of the smart terminal.
  • the target area refers to the front area of the display screen of the smart terminal, and the range size of the target area is determined according to the collection angle of the image collection device.
  • the image collection device may be a device that is provided by the smart terminal, such as a camera of the smart terminal, or an external device that is connected to the smart terminal through the connection port.
  • the gesture image is an image derived from the movement or state of the human body, such as the posture and state of the human hand.
  • the smart terminal is a terminal that interacts with the user, and the smart terminal may be a smart device such as a smart TV, a tablet computer, or a mobile phone.
  • the display screen of the smart terminal may be a display screen carried by the smart terminal itself or an external display screen.
  • the current location information refers to the location information that the gesture image collected by the image acquisition device at the current time is mapped to the display screen, and can reflect the location information of the gesture image in real time.
  • specifically, the image capture device is used to collect the gesture image;
  • the gesture image may be a static image or a dynamic image.
  • when the user's gesture is within the target area of the capture device, the capture device automatically searches for and captures the gesture image of the user and recognizes it according to the image features of the gesture image; through gesture analysis in gesture recognition, the shape feature or motion track of the gesture can be acquired, the current position information corresponding to the gesture image in the display screen of the smart terminal is determined, and the current position information of the recognized gesture image is obtained.
  • Step S120 When it is detected that the gesture corresponding to the gesture image is a preset activation gesture, the interaction control response region is activated according to the gesture image.
  • the gesture image may be a static gesture image or a dynamic gesture image.
  • the preset activation gesture may be preset according to requirements, such as setting a sliding gesture, a zoom gesture, a lifting gesture, etc. as a preset activation gesture, and there may be one or more preset activation gestures.
  • the preset activation gesture library formed by the preset activation gestures may be stored in the server, and the collected gesture image is sent to the server for identification; alternatively, the preset activation gesture library may be stored in the intelligent terminal, and the collected gesture image is identified directly.
  • the interactive control response area refers to an area capable of receiving user operations and triggering corresponding instructions according to user operations. Further, the interface of the interactive control response area may be rendered, and the function response information corresponding to the interactive control response area is displayed in the display of the smart terminal in the form of menu bar information, prompting the function corresponding to the interactive response area.
  • the menu bar information refers to a function bar of the computer and various terminals, including a plurality of function button information, which is convenient for the user to view.
  • the interaction control response area may also not perform interface rendering. After the interactive control response area is triggered according to the gesture image, there is no corresponding menu bar information display in the smart terminal display screen, but the area of the smart terminal corresponding to the interactive control response area can still be received. User action.
  • the gesture image is segmented and recognized, and the gesture corresponding to the currently recognized gesture image is compared with the preset activation gestures in the preset activation gesture library; when the gesture corresponding to the gesture image recognized in the current target region is detected to be the same as a preset activation gesture, the interactive control response area in the display screen of the smart terminal is activated.
  • Step S130 determining a target location of the interactive control response area according to the current location information.
  • the target position of the interactive control response area is determined according to the current position information of the detected gesture image; for example, the corresponding position of the gesture image in the display screen of the smart terminal is used as the target position of the interactive control response area, and the area corresponding to the target position is able to receive user operations. Further, if the function information corresponding to the target response area is displayed in the form of menu bar information through interface rendering, the display position of the menu bar information is the target position corresponding to the interactive control response area.
  • a mobile terminal control method is provided.
  • the interactive control response area is activated by identifying a gesture image in the target area, and the target position corresponding to the interactive control response area is determined according to current position information of the gesture image.
  • the target position corresponding to the interactive control response area is determined according to the current position of the recognized gesture image in the display screen of the smart terminal. As the current position of the gesture image changes, the target position of the interactive control response area changes correspondingly, which is convenient for the user to operate the interactive control response area and improves the convenience of operation.
  • step S120 includes: acquiring current state information corresponding to the smart terminal, determining corresponding function information according to the current state information, generating corresponding target interaction control response information according to the function information, and activating, according to the gesture image, the target interactive control response area corresponding to the target interaction control response information.
  • the current state information refers to information that can reflect the current working state of the smart terminal, including content information currently played by the smart terminal, status information of the playing, and the like. Different status information of the intelligent terminal corresponds to different function information.
  • the interactive control response information refers to information capable of responding according to user operations and capable of controlling the smart terminal to perform related operations.
  • the current state information corresponding to the smart terminal is obtained, the corresponding function information is determined according to the current state information, and the corresponding target interaction control response information is generated according to the function information.
  • the corresponding function information is selectable program list information; for example, corresponding program list information is generated according to program categories such as military, entertainment, and technology as the interactive control response information, which is convenient for the user to select a program.
  • further, the face image can also be identified, a registered user is matched according to the recognized face image information, and the corresponding interactive control response information is generated according to the recorded interest type of the registered user.
  • the current state information of the smart terminal is the play interface of a certain program, the corresponding function information is fast forward, backward, volume addition and subtraction, program switching, etc., and corresponding interactive control response information is generated according to the corresponding function information combination.
  • the state information of the smart terminal and the interaction control response information may be directly associated with each other, and the corresponding interactive control response information is obtained through the current play state.
  • the target interaction control response area is determined according to the current position information of the recognized gesture image, and the target interaction control response area is an area that can respond to the function corresponding to the target interaction control response information.
  • the corresponding function information is determined according to the current state information of the smart terminal, and the corresponding target interaction control response information is generated according to the function information; the corresponding interactive control response information can be generated according to the current working state of the smart terminal, so that the user can perform corresponding operations based on the current working state of the smart terminal, which further improves the convenience of operation.
  • the method further includes: detecting a gesture operation acting on the target location, triggering a corresponding interaction control instruction according to the gesture operation; and controlling the smart terminal to perform a corresponding operation according to the interaction control instruction.
  • the interaction control response area includes multiple interaction control response sub-areas, and each of the interaction control response sub-areas respectively corresponds to different interaction control instructions.
  • when the corresponding interaction control response sub-area is triggered, the corresponding interaction control instruction can be triggered to control the smart terminal to perform the corresponding operation.
  • after the interaction control response area is activated, the user's gesture motion track is detected, or the touch operation corresponding to each interaction control response sub-area is detected.
  • when the user's gesture motion track is detected to be a preset gesture track, the interaction control instruction corresponding to the interaction control response sub-area associated with the current gesture motion track is acquired.
  • for example, the interaction control response information corresponding to the interaction control response area is volume adjustment and program switching.
  • when it is detected that the user's gesture motion track is swinging to the left, an instruction for increasing the volume is triggered, and the smart terminal is controlled to display the volume information and increase the volume accordingly; when it is detected that the user's gesture motion track is swinging upward, a program switching instruction is triggered, and the smart terminal is controlled to display the program list and switch the display interface to the display interface corresponding to the next program after the current program.
  • further, the corresponding interaction control instruction may also be generated according to the touch operation on an interaction control response sub-area. If the interaction control response information corresponding to the currently activated interaction control response area is volume adjustment and program switching, the current interaction control response area is divided into interactive control response sub-regions corresponding to the four functions of volume increase, volume reduction, switching to the previous program, and switching to the next program, and a listener event is set in each interactive control response sub-area to listen for user operations.
  • the corresponding interaction control instruction is triggered according to the gesture operation acting on the interaction control response area, and the intelligent terminal is controlled to perform the related operation according to the interaction control instruction.
  • the corresponding interaction control instruction is triggered according to the gesture applied to the current interaction control response area, and the intelligent terminal is controlled according to the interaction control instruction to perform the corresponding operation; the control of the intelligent terminal is implemented by gestures without additional tool assistance, thereby further improving the convenience of operating the smart terminal.
  • the interaction location determining method further includes:
  • Step S210 identifying a face image in the target area, and acquiring first position information of the face image in the display screen of the smart terminal.
  • the target area refers to the front area of the display screen of the smart terminal, and the range size of the target area is determined according to the collection angle of the image collection device.
  • the image collection device may be a device that is provided by the smart terminal, such as a camera of the smart terminal, or an external device that is connected to the smart terminal through the connection port.
  • the gesture image is an image derived from the movement or state of the human body, such as the posture and state of the human hand.
  • the image collection device is used to collect the face image.
  • the capture device automatically searches for and captures the face image of the user, recognizes the face image according to the face features in the collected face image, and determines the position of the face image in the display of the smart terminal.
  • the operation of recognizing the face image information may be performed before or after the recognition of the gesture image, or simultaneously with the operation of recognizing the gesture image.
  • Step S220 Acquire a face image size in the identified target area, and determine a distance between the current user and the smart terminal display screen according to the face image size.
  • the face image size refers to the size of the face image in the display screen of the smart terminal.
  • the image collecting device collects the face image and projects the collected face image onto the display screen to obtain the face image size currently recognized in the target area; the distance between the user and the display screen of the smart terminal is calculated according to the imaging principle of the image capture device and the face image size.
  • for example, the size of the current face image formed in the camera is obtained, the distance of the user from the camera is obtained according to the focal length of the camera, and the distance between the user and the display of the smart terminal is obtained according to the relationship between the camera and the display of the smart terminal; for example, if the camera is mounted on the display screen of the smart terminal, the distance from the user to the camera is the distance from the user to the smart terminal.
  • Step S230 Obtain a preset distance range corresponding to the distance, and determine a target display size of the interaction information according to the preset distance range.
  • the preset distance range is a distance range of the user to the display screen of the smart terminal, and the preset distance range may be a plurality of individual distance thresholds or multiple distance ranges, and each preset distance range may be set as needed.
  • the interaction information refers to information displayed on the display screen of the smart terminal for interacting with the user, such as prompt information such as text or pictures in the smart terminal, and the prompt information may be display information that cannot be operated or operable information.
  • the smart terminal After performing the corresponding operations according to the interaction control instruction, the smart terminal can display the interaction information corresponding to the current state of the smart terminal.
  • the display size refers to the size of the interactive information display.
  • the information display size corresponding to each preset distance range is established in advance. For example, when the distance is within a first preset distance range, the interaction information is displayed at a default size; relative to the default size, when the distance is greater than the first preset distance range and smaller than a second preset distance range, the interaction information is displayed at twice the default size; when the distance is greater than the second preset distance range, it is displayed at three times the default size; and when the distance is smaller than the first preset distance range, it is displayed at half the default size.
  • the display size of the interactive information can be reduced or enlarged as the user moves.
  • a plurality of preset distance ranges and corresponding enlargement or reduction ratios may be set as needed, and the range of the preset distances may also be arbitrarily set as needed.
  • the distance between the current user and the display screen of the smart terminal is obtained, and the preset distance range corresponding to the current distance is determined, and the corresponding interactive information display size is determined according to the preset distance range.
  • Step S240 determining a target display position of the interaction information according to the first location information, and displaying the interaction information at the target display location according to the target display size.
  • the target display position of the interaction information is determined according to the current location information of the face image. For example, according to the current position information of the face image on the display screen of the smart terminal, the position information corresponding to the human eye is obtained as the target display position, and the text or picture prompt information is displayed at the position of the human eye corresponding to the face image, which is convenient for the user to view.
  • in other embodiments, if no face image is detected and only the gesture image that activates the interactive control response area is detected, the target display position of the interaction information is determined according to the current position of the gesture image, such as displaying the text or picture prompt information at the current position corresponding to the gesture image, which is convenient for the user to view.
  • further, the face image and the gesture image may be combined to determine the display position of the interaction information, such as displaying the interaction information at the middle position between the face image position and the gesture image position, which is convenient for users to view and operate.
  • the interaction information is adjusted to the size corresponding to the target display size and displayed at the target display position in the target display size.
  • in this embodiment, the distance between the user and the display screen is determined according to the face image size, so as to determine the target display size of the interaction information; the target display position of the interaction information is determined according to the position information of the face image, and the interaction information is displayed at the target display position in the target display size.
  • the display size of the interactive information is adjusted according to the distance between the user and the display screen of the smart terminal, so that the user can read the interactive information, thereby further improving the convenience of operation.
  • step S240 includes: acquiring the number of face images recognized in the target area; when the number of face images is one or more, acquiring current position information corresponding to each face image; and determining the target display position according to a preset rule based on the current position information of each face image.
  • the preset rule may be determined according to the reading habit of the user and the number of users in the target area.
  • specifically, when a plurality of face images are identified in the target area, the position coordinates corresponding to the respective face images are acquired, the intermediate position of the position coordinates is determined as the target display position, and the interaction information is displayed at the target position. If a face image is recognized on each of the left and right sides of the target area, the interactive information is displayed in the middle position, so that the users on the left and right sides can view it at the same time. It should be noted that the target display position may be set according to actual needs, and is not necessarily the middle position of the face images.
  • further, the gesture information of the user may be combined with the face image information. If multiple face images are detected and gesture information is also detected, the text prompt information in the interaction information is displayed at a position jointly determined by the multiple face images, while the operable interactive information is displayed at the position corresponding to the gesture image, or all of the interactive information is displayed at the position corresponding to the gesture image. If a plurality of gesture images are detected, the operable interactive information is displayed at the first detected gesture image position, based on the chronological order of the detected gesture images. For example, if the first of two users first makes a hand-raising gesture within the target area, the interaction information, or the operable interaction information within it, is displayed at the position corresponding to the gesture image of the first user.
  • the display size of the interaction information is determined by a preset distance range in which the distance corresponding to the user farthest from the smart terminal display screen is located.
  • the position information corresponding to each face image is acquired, and the target display position is determined according to the plurality of face image position information and the corresponding preset rule.
  • the target display location information is determined by combining multiple face image location information, so that multiple users can view or manipulate the interaction information, which further improves the convenience of operation.
  • an interactive location determining method including the following:
  • Step S301 identifying a gesture image in the target area, and acquiring current location information corresponding to the gesture image in the display screen of the smart terminal.
  • Step S302 Acquire current state information corresponding to the smart terminal, determine corresponding function information according to the current state information, and generate corresponding target interaction control response information according to the function information.
  • Step S303 When it is detected that the gesture corresponding to the gesture image is a preset activation gesture, the target interaction control response region corresponding to the target interaction control response information is activated according to the gesture image.
  • Step S304 determining a target location of the target interactive control response region according to the current location information.
  • Step S305 detecting a gesture operation acting on the target location, triggering a corresponding interaction control instruction according to the gesture operation, controlling the smart terminal to perform a corresponding operation according to the interaction control instruction, and displaying corresponding interaction information.
  • Step S306 identifying a face image in the target area.
  • step S307 it is determined whether the number of recognized face images is one or more. If yes, step S308 is performed, and if no, step S309 is performed.
  • Step S308 Acquire current position information corresponding to each face image, and determine a target display position of the interaction information according to a preset rule according to current position information of each face image.
  • Step S309 acquiring current location information corresponding to the face image, and determining a target display location of the interaction information according to the current location information.
  • Step S310 acquiring a face image size corresponding to the face image, and determining a distance between each current user and the smart TV display screen according to the face image size.
  • the distance between the user and the smart TV display screen is directly determined according to the current face image size. If the number of recognized face images is greater than one, each face image size is acquired to obtain the distance between each user and the smart TV display screen.
  • as shown in FIG. 4, when the gesture image 400A is detected in the current playback state of the smart terminal, the interactive control response area corresponding to the current smart TV display screen is activated, interface rendering is performed on the interaction control, and the corresponding menu bar information interface 410 is displayed on the smart television display screen.
  • as shown in FIG. 4A, when a gesture acting on the menu bar interface 410 triggers a volume-decrease instruction, the volume adjustment information 420 is displayed on the display interface of the smart terminal display screen and the volume is decreased. Further, when the face image 400B and the face image 400C are recognized, the corresponding positions of the face image 400B and the face image 400C in the display screen are respectively determined, the intermediate position between the face image 400B and the face image 400C is taken as the target display position of the interactive information, and the interactive information 430 is displayed at the target display position. In other embodiments, the target display location of the interactive information can be set as desired.
  • an interactive location determination system 500 that includes the following:
  • a gesture recognition module 510 configured to identify a gesture image in the target area, and obtain current position information corresponding to the gesture image in the display screen of the smart terminal;
  • the face recognition module 550 is configured to identify a face image in the target area, and obtain first position information of the face image in the display screen of the smart terminal.
  • the processing module 560 is configured to obtain a face image size corresponding to the face image, and determine a distance between the current user and the display screen of the smart terminal according to the face image size.
  • the display size determining module 570 is configured to acquire a preset distance range corresponding to the distance, and determine a target display size of the interaction information according to the preset distance range.
  • the display module 580 is configured to determine a target display location of the interaction information according to the first location information, and display the interaction information at the target display location according to the target display size.
  • the face recognition module 550 is further configured to acquire the number of face images recognized in the target area. When the number of face images is one or more, the current position information corresponding to each face image is acquired.
  • the display module 580 is further configured to determine a target display location according to a preset rule according to current location information of each face image.
  • FIG. 8 is a schematic diagram showing the internal structure of an intelligent terminal in an embodiment.
  • the intelligent terminal includes a processor connected through a system bus, a non-volatile storage medium, an internal memory and a network interface, a display screen, and an input device.
  • the non-volatile storage medium of the smart terminal may store an operating system and a computer program, and when the computer program is executed, may cause the smart terminal to perform an interactive location determining method.
  • the processor of the intelligent terminal is used to provide calculation and control capabilities to support the operation of the entire intelligent terminal.
  • the network interface is used for network communication with the server, such as sending the recognized gesture image to the server, acquiring gesture image data stored by the server, and the like.
  • the display screen of the smart terminal may be a liquid crystal display or an electronic ink display screen
  • the input device may be a touch layer covered on the display screen, or may be a button, a trackball or a touchpad provided on the smart terminal casing, or may be An external keyboard, trackpad, or mouse.
  • the smart terminal can be a mobile phone, a tablet or a personal digital assistant or a wearable device.
  • FIG. 8 is only a block diagram of a part of the structure related to the solution of the present application, and does not constitute a limitation of the smart terminal to which the solution of the present application is applied.
  • the specific smart terminal may include more or fewer components than those shown in the figures, or combine some components, or have a different component arrangement.
  • the interactive location determining system provided by the present application may be implemented in the form of a computer program executable on a smart terminal as shown in FIG. 8, and the non-volatile storage medium of the smart terminal may store the program modules constituting the interactive location determining system, such as the gesture recognition module 510, the activation module 520, and the target location determining module 530 in FIG. 5.
  • each of the program modules includes a computer program for causing the smart terminal to perform the steps in the interactive location determining method of the various embodiments of the present application described in this specification; the processor in the smart terminal can invoke the program modules of the interactive location determining system stored in the non-volatile storage medium of the smart terminal, run the corresponding programs, and implement the functions corresponding to each module of the interactive position determining system in this specification.
  • for example, the smart terminal can recognize the gesture image in the target area through the gesture recognition module 510 in the interactive position determining system shown in FIG. 5 and acquire the current position information corresponding to the gesture image in the display screen of the smart terminal, activate the interaction control response region according to the gesture image through the activation module 520, and determine the target location of the interactive control response region according to the current location information through the target location determining module 530.
  • a computer readable storage medium which activates an interactive control response area by recognizing a gesture image in a target area, and determines a target position corresponding to the interactive control response area according to current position information of the gesture image.
  • the target position corresponding to the interactive control response area is determined according to the current position of the recognized gesture image in the display screen of the smart terminal. As the current position of the gesture image changes, the target position of the interactive control response area changes correspondingly, which is convenient for the user to operate the interactive control response area and improves the convenience of operation.
  • when the computer program is executed by the processor, the processor further performs the steps of: acquiring current state information corresponding to the smart terminal, determining corresponding function information according to the current state information, and generating corresponding target interaction control response information according to the function information; and activating, according to the gesture image, the target interactive control response region corresponding to the target interaction control response information.
  • when the computer program is executed by the processor, the processor further performs the steps of: detecting a gesture operation acting on the target location, triggering a corresponding interaction control instruction according to the gesture operation; and controlling the smart terminal to perform the corresponding operation according to the interaction control instruction.
  • when the computer program is executed by the processor, the processor further performs the steps of: acquiring the number of face images recognized in the target area; when the number of face images is one or more, acquiring the current position information corresponding to each face image; and determining the target display position according to a preset rule based on the current position information of each face image.
  • An intelligent terminal comprising one or more processors, a memory and one or more programs, wherein one or more programs are stored in a memory and configured to be executed by one or more processors for execution The following steps: identifying a gesture image in the target area, and acquiring current position information corresponding to the gesture image in the smart terminal display; and when detecting that the gesture corresponding to the gesture image is a preset activation gesture, activating the interactive control response area according to the gesture image Determining the target position of the interactive control response area based on the current location information.
  • one or more programs are stored in the memory and configured to be executed by one or more processors, the program being further configured to perform the steps of: acquiring current state information corresponding to the smart terminal, according to current state information Determining corresponding function information, generating corresponding target interaction control response information according to the function information; and activating the target interaction control response area corresponding to the target interaction control response information according to the gesture image.
  • one or more programs are stored in a memory and configured to be executed by one or more processors, the program further for performing the steps of: detecting a gesture operation at a target location, triggering according to a gesture operation Corresponding interactive control instruction; controlling the intelligent terminal to perform a corresponding operation according to the interactive control instruction.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • User Interface Of Digital Computer (AREA)
  • Position Input By Displaying (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to an interaction position determining method, comprising: recognizing a gesture image within a target area, and acquiring current position information corresponding to the gesture image in a display screen of an intelligent terminal; when it is detected that the gesture corresponding to the gesture image is a preset activation gesture, activating an interaction control response area according to the gesture image; and determining a target position of the interaction control response area according to the current position information. The interaction control response area is activated by means of the gesture image, and the position of the interaction control response area is determined according to the position information of the gesture image, which improves the convenience of operation. The present invention further provides an interaction position determining system, a storage medium, and an intelligent terminal.

Description

Interaction position determining method and system, storage medium, and intelligent terminal

TECHNICAL FIELD
The present invention relates to the technical field of human-computer interaction for intelligent terminals, and in particular to an interaction position determining method, system, storage medium, and intelligent terminal.
BACKGROUND
With the continuous development of technology and the continuous improvement of people's quality of life, televisions are gradually becoming intelligent. To meet users' diverse needs, smart TVs carry more and more applications, yet the menu bar or prompt information in a conventional smart TV is usually displayed at a fixed position. When the display screen of the smart TV is large, it is inconvenient for the user to operate the menu bar, which affects the interaction between the user and the smart TV.
SUMMARY
On this basis, it is necessary to address the above problem by providing an interaction position determining method, system, storage medium, and intelligent terminal that, based on gesture recognition, determine the target position of an interaction control response area according to a gesture image and the position of the gesture image, thereby improving the convenience of operation.
An interaction position determining method includes:
recognizing a gesture image within a target area, and acquiring current position information corresponding to the gesture image in a display screen of an intelligent terminal;
when it is detected that the gesture corresponding to the gesture image is a preset activation gesture, activating an interaction control response area according to the gesture image; and
determining a target position of the interaction control response area according to the current position information.
In an embodiment, the step of activating the interaction control response area according to the gesture image includes:
acquiring current state information corresponding to the intelligent terminal, determining corresponding function information according to the current state information, and generating corresponding target interaction control response information according to the function information; and
activating, according to the gesture image, a target interaction control response area corresponding to the target interaction control response information.
In an embodiment, after the step of determining the target position of the interaction control response area according to the current position information, the method includes:
detecting a gesture operation acting on the target position, and triggering a corresponding interaction control instruction according to the gesture operation; and
controlling the intelligent terminal to perform a corresponding operation according to the interaction control instruction.
In an embodiment, the interaction position determining method further includes:
recognizing a face image within the target area, and acquiring first position information of the face image in the display screen of the intelligent terminal;
acquiring a face image size corresponding to the face image;
determining a distance between the current user and the display screen of the intelligent terminal according to the face image size;
acquiring a preset distance range corresponding to the distance, and determining a target display size of interaction information according to the preset distance range; and
determining a target display position of the interaction information according to the first position information, and displaying the interaction information at the target display position according to the target display size.
In an embodiment, the step of determining the target display position according to the position of the face image in the display screen of the intelligent terminal includes:
acquiring the number of face images recognized within the target area;
when the number of face images is more than one, acquiring current position information corresponding to each face image; and
determining the target display position according to a preset rule based on the current position information of each face image.
An interaction position determining system includes:
a gesture recognition module, configured to recognize a gesture image within a target area, and acquire current position information corresponding to the gesture image in a display screen of an intelligent terminal;
an activation module, configured to activate an interaction control response area according to the gesture image when it is detected that the gesture corresponding to the gesture image is a preset activation gesture; and
a target position determining module, configured to determine a target position of the interaction control response area according to the current position information.
In an embodiment, the activation module is further configured to acquire current state information corresponding to the intelligent terminal, determine corresponding function information according to the current state information, and generate corresponding target interaction control response information according to the function information; and activate, according to the gesture image, a target interaction control response area corresponding to the target interaction control response information.
In an embodiment, the interaction position determining system further includes:
a control module, configured to detect a gesture operation acting on the target position, trigger a corresponding interaction control instruction according to the gesture operation, and control the intelligent terminal to perform a corresponding operation according to the interaction control instruction.
In an embodiment, the interaction position determining system further includes:
a face recognition module, configured to recognize a face image within the target area, and acquire first position information of the face image in the display screen of the intelligent terminal;
a processing module, configured to acquire a face image size corresponding to the face image, and determine a distance between the current user and the display screen of the intelligent terminal according to the face image size;
a display size determining module, configured to acquire a preset distance range corresponding to the distance, and determine a target display size of interaction information according to the preset distance range; and
a display module, configured to determine a target display position of the interaction information according to the first position information, and display the interaction information at the target display position according to the target display size.
In an embodiment, the face recognition module is further configured to acquire the number of face images recognized within the target area;
a face position information acquiring module is configured to acquire current position information corresponding to each face image when the number of face images is more than one; and
a target display position determining module is configured to determine the target display position according to a preset rule based on the current position information of each face image.
A computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to perform the following steps: recognizing a gesture image within a target area, and acquiring current position information corresponding to the gesture image in a display screen of an intelligent terminal; when it is detected that the gesture corresponding to the gesture image is a preset activation gesture, activating an interaction control response area according to the gesture image; and determining a target position of the interaction control response area according to the current position information.
An intelligent terminal includes one or more processors, a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the program being configured to perform the following steps: recognizing a gesture image within a target area, and acquiring current position information corresponding to the gesture image in a display screen of the intelligent terminal; when it is detected that the gesture corresponding to the gesture image is a preset activation gesture, activating an interaction control response area according to the gesture image; and determining a target position of the interaction control response area according to the current position information.
In the above interaction position determining method, system, storage medium, and intelligent terminal, the interaction control response area is activated by recognizing the gesture image within the target area, and the target position corresponding to the interaction control response area is determined according to the current position information of the gesture image. The target position corresponding to the interaction control response area is determined according to the current position of the recognized gesture image in the display screen of the intelligent terminal; as the current position of the gesture image changes, the target position of the interaction control response area changes correspondingly, which makes it convenient for the user to operate the interaction control response area and improves the convenience of operation.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a flowchart of an interaction position determining method in an embodiment;
FIG. 2 is a flowchart of an interaction position determining method in an embodiment;
FIG. 3 is a flowchart of an interaction position determining method in a specific embodiment;
FIG. 4 is a diagram of an interaction control interface of an intelligent terminal in an embodiment;
FIG. 4A is a schematic diagram of an interaction control interface of an intelligent terminal in an embodiment;
FIG. 5 is a structural block diagram of an interaction position determining system in an embodiment;
FIG. 6 is a structural block diagram of an interaction position determining system in an embodiment;
FIG. 7 is a structural block diagram of an interaction position determining system in another embodiment;
FIG. 8 is a diagram of the internal structure of an intelligent terminal in an embodiment.
DETAILED DESCRIPTION
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention and are not intended to limit it.
As shown in FIG. 1, in an embodiment, an interaction position determining method is provided, including the following content:
Step S110: recognize a gesture image within a target area, and acquire current position information corresponding to the gesture image in a display screen of an intelligent terminal.
Here, the target area refers to the area in front of the display screen of the intelligent terminal, and the size of the target area is determined by the capture angle of the image capture device. The image capture device may be a device carried by the intelligent terminal itself, such as the camera of the intelligent terminal, or an external device connected to the intelligent terminal through a connection port. The gesture image is an image derived from the movement or state of the human body, such as the posture and state of a human hand.
The intelligent terminal is a terminal that interacts with the user and may be a smart device such as a smart TV, a tablet computer, or a mobile phone; the display screen of the intelligent terminal may be a display screen carried by the intelligent terminal itself or an external display screen. The current position information refers to the position information obtained by mapping the gesture image captured by the image capture device at the current moment onto the display screen, and it reflects the position of the gesture image in real time.
Specifically, the image capture device is used to capture the gesture image, which may be a static image or a dynamic image. When the user's gesture is within the target area of the capture device, the capture device automatically searches for and captures the user's gesture image and recognizes it according to the features of the gesture image; through gesture analysis in gesture recognition, the shape features or motion track of the gesture can be acquired, the current position information corresponding to the gesture image in the display screen of the intelligent terminal is determined, and the current position information of the recognized gesture image is obtained.
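As an illustration of this step, the sketch below maps the center of a detected gesture bounding box from camera coordinates to display coordinates to obtain the "current position information". It is only a minimal sketch, not the patent's implementation; the bounding-box input, the normalization, and the horizontal mirroring are assumptions.

    def gesture_to_display_position(gesture_bbox, camera_size, display_size, mirror_x=True):
        """Map the center of a detected gesture bounding box (camera pixels)
        to a position on the intelligent-terminal display (display pixels).

        gesture_bbox: (x, y, w, h) in camera coordinates
        camera_size:  (width, height) of the camera frame
        display_size: (width, height) of the display screen
        mirror_x:     mirror horizontally so on-screen motion matches the user's view
        """
        x, y, w, h = gesture_bbox
        cam_w, cam_h = camera_size
        disp_w, disp_h = display_size

        # Normalize the bounding-box center to the range [0, 1].
        u = (x + w / 2.0) / cam_w
        v = (y + h / 2.0) / cam_h
        if mirror_x:
            u = 1.0 - u

        # Scale to display coordinates; this is the "current position information".
        return int(u * disp_w), int(v * disp_h)

    if __name__ == "__main__":
        # A gesture near the right edge of a 1280x720 camera frame maps to the
        # left part of a 1920x1080 display when mirrored.
        print(gesture_to_display_position((1000, 300, 80, 80), (1280, 720), (1920, 1080)))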
Step S120: when it is detected that the gesture corresponding to the gesture image is a preset activation gesture, activate an interaction control response area according to the gesture image.
Here, the gesture image may be a static gesture image or a dynamic gesture image. The preset activation gesture may be set in advance as needed; for example, a sliding gesture, a zoom gesture, or a lifting gesture may be set as the preset activation gesture, and there may be one or more preset activation gestures. The preset activation gesture library formed by the preset activation gestures may be stored on a server, and the captured gesture image is sent to the server for recognition; alternatively, the preset activation gesture library may be stored in the intelligent terminal, and the captured gesture image is recognized directly.
The interaction control response area refers to an area that can receive user operations and trigger corresponding instructions according to the user operations. Further, interface rendering may be performed on the interaction control response area, and the function response information corresponding to the interaction control response area is displayed in the display screen of the intelligent terminal in the form of menu bar information, indicating the function corresponding to the interaction response area. The menu bar information refers to a function bar of computers and various terminals that contains multiple function button items and is convenient for the user to view. The interaction control response area may also not be rendered; after the interaction control response area is triggered according to the gesture image, no corresponding menu bar information is displayed in the display screen of the intelligent terminal, but the area of the intelligent terminal corresponding to the interaction control response area can still receive user operations.
Specifically, the gesture image is segmented and recognized, the gesture corresponding to the currently recognized gesture image is compared with the preset activation gestures in the preset activation gesture library, and when it is detected that the gesture corresponding to the gesture image recognized in the current target area is the same as a preset activation gesture, the interaction control response area in the display screen of the intelligent terminal is activated.
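A minimal sketch of the activation check, and of taking the gesture position as the target position of the response area, is given below, assuming the gesture recognizer already outputs a label such as "swipe" or "lift". The label names, the local gesture library, and the response-area size are illustrative assumptions, not the patent's implementation.

    # Preset activation gesture library; per the text it may be stored locally
    # on the intelligent terminal or on a server.
    PRESET_ACTIVATION_GESTURES = {"swipe", "zoom", "lift"}

    def try_activate_response_area(recognized_gesture, gesture_display_pos):
        """Return an interaction control response area (a rectangle centered on
        the gesture position) if the recognized gesture is a preset activation
        gesture, otherwise None."""
        if recognized_gesture not in PRESET_ACTIVATION_GESTURES:
            return None
        x, y = gesture_display_pos
        w, h = 400, 240  # illustrative size of the response area
        # The target position of the response area follows the gesture position.
        return {"x": x - w // 2, "y": y - h // 2, "width": w, "height": h}

    # Example: a "lift" gesture recognized at display position (960, 540).
    print(try_activate_response_area("lift", (960, 540)))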
Step S130: determine a target position of the interaction control response area according to the current position information.
Specifically, the target position of the interaction control response area is determined according to the current position information of the detected gesture image; for example, the position corresponding to the gesture image in the display screen of the intelligent terminal is taken as the target position of the interaction control response area, and the area corresponding to the target position can receive user operations. Further, if the function information corresponding to the target response area is displayed in the form of menu bar information through interface rendering, the display position of the menu bar information is the target position corresponding to the interaction control response area.
In this embodiment, a mobile terminal control method is provided, in which the interaction control response area is activated by recognizing the gesture image within the target area, and the target position corresponding to the interaction control response area is determined according to the current position information of the gesture image. The target position corresponding to the interaction control response area is determined according to the current position of the recognized gesture image in the display screen of the intelligent terminal; as the current position of the gesture image changes, the target position of the interaction control response area changes correspondingly, which makes it convenient for the user to operate the interaction control response area and improves the convenience of operation.
In an embodiment, step S120 includes: acquiring current state information corresponding to the intelligent terminal, determining corresponding function information according to the current state information, and generating corresponding target interaction control response information according to the function information; and activating, according to the gesture image, a target interaction control response area corresponding to the target interaction control response information.
Here, the current state information refers to information that can reflect the current working state of the intelligent terminal, including the content currently played by the intelligent terminal, the playback state, and the like. Different state information of the intelligent terminal corresponds to different function information. The interaction control response information refers to information that can respond to user operations and control the intelligent terminal to perform related operations.
Specifically, the current state information corresponding to the intelligent terminal is acquired, the corresponding function information is determined according to the current state information, and the corresponding target interaction control response information is generated according to the function information. For example, if the current state information of the intelligent terminal is the initial selection state, the corresponding function information is selectable program list information; for example, corresponding program list information is generated according to program categories such as military, entertainment, and technology as the interaction control response information, which is convenient for the user to select a program. Further, a face image may also be recognized, a registered user may be matched according to the recognized face image information, and corresponding interaction control response information may be generated according to the recorded interest type of the registered user. If the current state information of the intelligent terminal is the playback interface of a certain program, the corresponding function information includes fast forward, rewind, volume up/down, program switching, and the like, and corresponding interaction control response information is generated according to the combination of the corresponding function information.
In other embodiments, the state information of the intelligent terminal may also be directly associated with the interaction control response information, and the corresponding interaction control response information is obtained from the current playback state.
Further, the target interaction control response area is determined according to the current position information of the recognized gesture image; the target interaction control response area is an area that can respond to the functions corresponding to the target interaction control response information.
In this embodiment, the corresponding function information is determined according to the current state information of the intelligent terminal, and the corresponding target interaction control response information is generated according to the function information, so that interaction control response information matching the current working state of the intelligent terminal can be generated; this makes it convenient for the user to perform corresponding operations based on the current working state of the intelligent terminal and further improves the convenience of operation.
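The mapping from the terminal's current state to function information and then to target interaction control response information could look like the following sketch; the state names and function lists are illustrative assumptions based on the examples above, not a prescribed implementation.

    # Illustrative mapping from the terminal's current state to the function
    # information used to build the target interaction control response information.
    STATE_FUNCTIONS = {
        "initial_selection": ["military", "entertainment", "technology"],  # program categories
        "playing":           ["fast_forward", "rewind", "volume_up",
                              "volume_down", "previous_program", "next_program"],
    }

    def build_response_info(current_state):
        """Generate target interaction control response information from the
        current state of the intelligent terminal."""
        functions = STATE_FUNCTIONS.get(current_state, [])
        return {"state": current_state, "functions": functions}

    print(build_response_info("playing"))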
In an embodiment, after step S130, the method further includes: detecting a gesture operation acting on the target position, and triggering a corresponding interaction control instruction according to the gesture operation; and controlling the intelligent terminal to perform a corresponding operation according to the interaction control instruction.
Here, the interaction control response area includes multiple interaction control response sub-areas, and each interaction control response sub-area corresponds to a different interaction control instruction; when a corresponding interaction control response sub-area is triggered, the corresponding interaction control instruction can be triggered to control the intelligent terminal to perform the corresponding operation.
Specifically, after the interaction control response area is activated, the user's gesture motion track is detected, or the touch operation corresponding to each interaction control response sub-area is detected; when the user's gesture motion track is detected to be a preset gesture track, the interaction control instruction corresponding to the interaction control response sub-area associated with the current gesture motion track is acquired. For example, if the interaction control response information corresponding to the interaction control response area is volume adjustment and program switching, then when the user's gesture motion track is detected as swinging to the left, an instruction to increase the volume is triggered, and the intelligent terminal is controlled to display the volume information and increase the volume accordingly; when the user's gesture motion track is detected as swinging upward, a program switching instruction is triggered, and the intelligent terminal is controlled to display the program list and switch the display interface to the interface corresponding to the next program after the current program.
Further, the corresponding interaction control instruction may also be generated according to a touch operation on an interaction control response sub-area. For example, if the interaction control response information corresponding to the currently activated interaction control response area is volume adjustment and program switching, the current interaction control response area is divided into interaction control response sub-areas corresponding to the four functions of volume up, volume down, switching to the previous program, and switching to the next program, and a listener event is set in each interaction control response sub-area to listen for user operations. When the user operates at the position of the interaction control response sub-area corresponding to the volume-up function and the listener event of that sub-area detects the user operation, a corresponding volume-up instruction is generated, and the intelligent terminal is controlled to display the volume information and increase the volume accordingly.
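The dispatch of gesture motion tracks and sub-area touches to interaction control instructions described above can be sketched as follows; the track names, the sub-area layout, and the instruction identifiers are illustrative assumptions taken from the volume/program example.

    # Map gesture motion tracks and response sub-areas to control instructions,
    # following the volume/program example in the text.
    TRACK_TO_INSTRUCTION = {
        "swing_left": "volume_up",
        "swing_up": "next_program",
    }

    SUB_AREAS = [  # (name, x, y, width, height) inside the activated response area
        ("volume_up",          0,  0, 100, 60),
        ("volume_down",      100,  0, 100, 60),
        ("previous_program",   0, 60, 100, 60),
        ("next_program",     100, 60, 100, 60),
    ]

    def dispatch(gesture_track=None, touch_point=None):
        """Return the interaction control instruction triggered by a gesture
        motion track or by a touch on one of the response sub-areas."""
        if gesture_track in TRACK_TO_INSTRUCTION:
            return TRACK_TO_INSTRUCTION[gesture_track]
        if touch_point is not None:
            px, py = touch_point
            for name, x, y, w, h in SUB_AREAS:
                if x <= px < x + w and y <= py < y + h:  # the "listener" check
                    return name
        return None

    print(dispatch(gesture_track="swing_left"))  # -> volume_up
    print(dispatch(touch_point=(150, 30)))       # -> volume_down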
In this embodiment, after the interaction control response area is activated, the corresponding interaction control instruction is triggered according to the gesture operation acting on the interaction control response area, and the intelligent terminal is controlled to perform the related operation according to the interaction control instruction. The corresponding interaction control instruction is triggered according to the gesture acting on the current interaction control response area, and the intelligent terminal is controlled to perform the corresponding operation according to the interaction control instruction; the intelligent terminal is thus controlled by gestures without additional tools, which further improves the convenience of operating the intelligent terminal.
As shown in FIG. 2, in an embodiment, the interaction position determining method further includes:
Step S210: recognize a face image within the target area, and acquire first position information of the face image in the display screen of the intelligent terminal.
Here, the target area refers to the area in front of the display screen of the intelligent terminal, and the size of the target area is determined by the capture angle of the image capture device. The image capture device may be a device carried by the intelligent terminal itself, such as the camera of the intelligent terminal, or an external device connected to the intelligent terminal through a connection port. The gesture image is an image derived from the movement or state of the human body, such as the posture and state of a human hand.
Specifically, the image capture device is used to capture the face image. When the user's face is within the target area of the capture device, the capture device automatically searches for and captures the user's face image, recognizes the face image according to the face features in the captured face image, and determines the position of the face image in the display screen of the intelligent terminal.
Further, the operation of recognizing the face image information may be performed after or before the recognition of the gesture image, or simultaneously with the operation of recognizing the gesture image.
Step S220: acquire the size of the face image recognized within the target area, and determine a distance between the current user and the display screen of the intelligent terminal according to the face image size.
Here, the face image size refers to the size of the face image in the display screen of the intelligent terminal.
Specifically, the image capture device captures the face image and projects the captured face image onto the display screen, the size of the face image currently recognized within the target area is obtained, and the distance between the user and the display screen of the intelligent terminal is calculated according to the imaging principle of the image capture device and the face image size.
For example, according to the camera imaging principle, the size of the current face image formed in the camera is obtained, the distance between the user and the camera is obtained according to the focal length of the camera, and the distance between the user and the display screen of the intelligent terminal is obtained according to the relationship between the camera and the display screen of the intelligent terminal; for example, if the camera is mounted on the display screen of the intelligent terminal, the distance from the user to the camera is the distance from the user to the intelligent terminal.
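The distance estimation from the imaged face size follows the pinhole camera relation distance = focal length x real width / imaged width. The sketch below assumes an average real face width and a focal length expressed in pixels; both values are illustrative assumptions rather than values given by the patent.

    def estimate_distance_cm(face_pixel_width, focal_length_px, real_face_width_cm=16.0):
        """Estimate the user's distance from the camera with the pinhole model:
        distance = focal_length * real_width / imaged_width.
        If the camera is mounted on the display, this is also the distance
        between the user and the intelligent-terminal display screen."""
        return focal_length_px * real_face_width_cm / face_pixel_width

    # A face imaged 200 px wide with a 1000 px focal length is roughly 80 cm away.
    print(round(estimate_distance_cm(200, 1000.0), 1))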
Step S230: a preset distance range corresponding to the distance is obtained, and the target display size of the interaction information is determined according to the preset distance range.
The preset distance range is a range of distances from the user to the display screen of the smart terminal. The preset distance ranges may be multiple individual distance thresholds or multiple distance intervals, and each preset distance range may be set as required. The interaction information refers to information displayed on the display screen of the smart terminal for interacting with the user, such as prompt information in the form of text or pictures; the prompt information may be display information that cannot be operated or information that can be operated. After performing a corresponding operation according to an interaction control instruction, the smart terminal can display the interaction information corresponding to its current state. The display size refers to the size at which the interaction information is displayed.
Specifically, the information display size corresponding to each preset distance range is established in advance. For example, when the distance is within a first preset distance range, the interaction information is displayed at a default size; when the distance is greater than the first preset distance range and less than a second preset distance range, the interaction information is displayed at twice the default size; when the distance is greater than the second preset distance range, the interaction information is displayed at three times the default size; and when the distance is less than the first preset distance range, the interaction information is displayed at half the default size. The display size of the interaction information can thus be reduced or enlarged as the user moves.
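One way to realize the preset distance ranges is a small lookup that returns a scale factor relative to the default display size, as sketched below. The range boundaries and scale factors are assumptions chosen only to mirror the example above.

```python
def display_scale(distance_mm: float) -> float:
    """Return the interaction-information scale factor for a given user distance.

    The range boundaries (in mm) and scale factors are illustrative assumptions:
    inside the first range the default size is used, farther ranges enlarge the
    information, and distances closer than the first range shrink it.
    """
    first_range = (1000.0, 2000.0)    # assumed first preset distance range
    second_range = (2000.0, 3500.0)   # assumed second preset distance range
    if distance_mm < first_range[0]:
        return 0.5                    # closer than the first range: half size
    if distance_mm <= first_range[1]:
        return 1.0                    # within the first range: default size
    if distance_mm <= second_range[1]:
        return 2.0                    # within the second range: double size
    return 3.0                        # beyond the second range: triple size

print(display_scale(2500.0))  # -> 2.0
```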
It should be noted that multiple preset distance ranges and corresponding enlargement or reduction factors may be set as required, and the extent of each preset distance range may also be set arbitrarily as required.
The distance between the current user and the display screen of the smart terminal is obtained, the preset distance range containing the current distance is determined, and the corresponding display size of the interaction information is determined according to that preset distance range.
Step S240: the target display position of the interaction information is determined according to the first position information, and the interaction information is displayed at the target display position at the target display size.
Specifically, when the face image is captured, the current position information of the face image on the display screen of the smart terminal is obtained, and the target display position of the interaction information is determined according to the current position information of the face image. For example, according to the current position information of the face image on the display screen of the smart terminal, the position information corresponding to the human eyes is obtained as the target display position, and the text or picture prompt information is displayed at the eye position corresponding to the face image, which is convenient for the user to view.
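A small sketch of picking the display position from the detected face follows; approximating the eye line as the upper third of the face bounding box is a heuristic assumption for illustration, not the method claimed here.

```python
def eye_position_from_face(face_bbox):
    """Approximate the eye position from a face bounding box (x, y, w, h) in screen pixels.

    Heuristic assumption: the eyes sit roughly one third of the way down the face box.
    """
    x, y, w, h = face_bbox
    return (x + w // 2, y + h // 3)

# Display the text or picture prompt near the estimated eye position of the detected face.
print(eye_position_from_face((800, 300, 180, 220)))  # -> (890, 373)
```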
In other embodiments, if no face image is detected and only the gesture image that activates the interaction control response region is detected, the target display position of the interaction information is determined according to the current position information of the gesture image; for example, the text or picture prompt information is displayed at the current position corresponding to the gesture image, which is convenient for the user to view.
Further, in other embodiments, when a face image is detected, the face image may be combined with the gesture image to determine the display position of the interaction information; for example, the interaction information is displayed at the middle position between the face image position and the gesture image position, which is convenient for the user to view and operate.
The interaction information is adjusted to the size corresponding to the target display size and displayed at the target display position at the target display size.
In this embodiment, the face image information within the target region is recognized, the position information corresponding to the face image is determined, the distance between the user and the display screen is determined according to the size of the face image within the target region so as to determine the target display size of the interaction information, the target display position of the interaction information is determined according to the position information of the face image, and the interaction information is displayed at the target display position at the target display size. The display size of the interaction information is adjusted according to the distance between the user and the display screen of the smart terminal, which makes it easier for the user to read the interaction information and further improves operating convenience.
In one embodiment, step S240 includes: obtaining the number of face images recognized within the target region; when the number of face images is more than one, obtaining the current position information corresponding to each face image, and determining the target display position according to the current position information of each face image and a preset rule.
The preset rule may be determined according to the users' reading habits and the number of users within the target region.
Specifically, when multiple face images are recognized within the target region, the position coordinates corresponding to each face image are obtained, the middle position of these position coordinates is determined as the target display position, and the interaction information is displayed at the target position. For example, if one face image is recognized on the left side of the target region and another on the right side, the interaction information is displayed at the middle position so that the users on both sides can view it at the same time. It should be noted that the target display position may be set according to actual needs and is not necessarily the middle position of the face images.
Further, the user's gesture information may also be combined with the face image information. For example, if multiple face images are detected and gesture information is also detected, the text prompt information in the interaction information is displayed at the position jointly determined by the multiple face images, and the operable interaction information is displayed at the position corresponding to the gesture image, or all the interaction information is displayed at the position corresponding to the gesture image. If multiple gesture images are detected, the operable interaction information is displayed at the position of the first detected gesture image according to the chronological order in which the gesture images were detected. For example, if the first of two users makes a hand-raising gesture first and appears within the target region, the interaction information, or the operable interaction information therein, is displayed at the position corresponding to the gesture image of that first user.
Further, when there are multiple users, the display size of the interaction information is determined according to the preset distance range containing the distance of the user farthest from the display screen of the smart terminal.
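For multiple viewers, the target display position can be taken as the midpoint of the detected face positions and the display size from the farthest viewer, as sketched below. The helper passed in for the scale lookup stands in for the hypothetical display_scale sketch given earlier; all names and values are illustrative assumptions.

```python
def multi_user_layout(face_positions, face_distances_mm, scale_for_distance):
    """Combine several detected faces into one display position and size.

    face_positions: list of (x, y) screen positions of the recognized faces.
    face_distances_mm: estimated viewer distances for those faces.
    scale_for_distance: callable mapping a distance to a display scale factor.
    Returns the midpoint of the face positions and the scale chosen from the
    farthest viewer.
    """
    n = len(face_positions)
    mid_x = sum(p[0] for p in face_positions) // n
    mid_y = sum(p[1] for p in face_positions) // n
    return (mid_x, mid_y), scale_for_distance(max(face_distances_mm))

positions = [(400, 500), (1500, 520)]   # one face on the left, one on the right
distances = [1200.0, 2800.0]            # the right-hand viewer is farther away
# A trivial stand-in for the assumed display_scale helper:
print(multi_user_layout(positions, distances, lambda d: 2.0 if d > 2000 else 1.0))
# -> ((950, 510), 2.0)
```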
In this embodiment, when more than one face image is detected within the target region, the position information corresponding to each face image is obtained, and the target display position is determined according to the position information of the multiple face images and the corresponding preset rule. Determining the target display position by combining the position information of multiple face images makes it convenient for multiple users to view or operate the interaction information, further improving operating convenience.
As shown in FIG. 3, in a specific embodiment, taking a smart TV as the smart terminal, an interaction position determination method is provided, which includes the following steps.
Step S301: a gesture image within the target region is recognized, and the current position information corresponding to the gesture image on the display screen of the smart terminal is obtained.
Step S302: the current state information corresponding to the smart terminal is obtained, the corresponding function information is determined according to the current state information, and the corresponding target interaction control response information is generated according to the function information.
Step S303: when it is detected that the gesture corresponding to the gesture image is a preset activation gesture, the target interaction control response region corresponding to the target interaction control response information is activated according to the gesture image.
Step S304: the target position of the target interaction control response region is determined according to the current position information.
Step S305: a gesture operation acting on the target position is detected, the corresponding interaction control instruction is triggered according to the gesture operation, the smart terminal is controlled to perform the corresponding operation according to the interaction control instruction, and the corresponding interaction information is displayed.
Step S306: a face image within the target region is recognized.
Step S307: it is determined whether the number of recognized face images is more than one; if so, step S308 is performed; if not, step S309 is performed.
Step S308: the current position information corresponding to each face image is obtained, and the target display position of the interaction information is determined according to the current position information of each face image and a preset rule.
Step S309: the current position information corresponding to the face image is obtained, and the target display position of the interaction information is determined according to the current position information.
Step S310: the face image size corresponding to each face image is obtained, and the distance between each current user and the display screen of the smart TV is determined according to the face image size.
Specifically, when the number of recognized face images is one, the distance between the user and the display screen of the smart TV is determined directly from the size of the current face image. When the number of recognized face images is greater than one, the size of each face image is obtained, and the distance between each user and the display screen of the smart TV is obtained from the corresponding size.
Step S311: the target display size of the interaction information is determined according to the farthest of the distances between the users and the display screen of the smart TV, and the interaction information is displayed at the target display position at the target display size.
Specifically, when the number of recognized face images is one, the target display size of the interaction information is determined directly from the distance between the current face image and the display screen of the smart TV. When the number of recognized face images is greater than one, the farthest of the obtained distances between the users and the display screen of the smart TV is taken as the target distance, and the display size of the interaction information is determined according to the target distance.
As shown in FIG. 4, when a gesture image 400A is detected while the smart terminal is in the current playback state, the interaction control response region corresponding to the display screen of the smart TV is activated, interface rendering is performed on the interaction control response region, and the corresponding menu bar information interface 410 is displayed on the display screen of the smart TV.
As shown in FIG. 4A, when a gesture acting on the menu bar interface 410 triggers a volume-down instruction, volume adjustment information 420 is displayed on the display interface of the smart terminal and the volume is reduced. Further, face images are recognized; when a face image 400B and a face image 400C are recognized, the positions corresponding to the face image 400B and the face image 400C on the display screen are determined respectively, the middle position between the face image 400B and the face image 400C is determined as the target display position of the interaction information, and the interaction information 430 is displayed at the target display position. In other embodiments, the target display position of the interaction information may be set as required.
In this embodiment, the interaction control response region is activated by recognizing the gesture image within the target region, the target position corresponding to the interaction control response region is determined according to the current position information of the gesture image, and the display position and size of the interaction information are determined by recognizing the face images. The target position of the interaction control response region is determined according to the current position of the recognized gesture image on the display screen of the smart terminal and changes correspondingly as the current position of the gesture image changes, which makes it convenient for the user to operate the interaction control response region; the interaction information changes with the position of the face images, and its display size is determined according to the size of the face images, which makes it convenient for the user to view the interaction information and improves operating convenience.
As shown in FIG. 5, in one embodiment, an interaction position determination system 500 is provided, which includes the following:
a gesture recognition module 510, configured to recognize a gesture image within a target region and obtain current position information corresponding to the gesture image on a display screen of a smart terminal;
an activation module 520, configured to activate an interaction control response region according to the gesture image when it is detected that the gesture corresponding to the gesture image is a preset activation gesture; and
a target position determination module 530, configured to determine a target position of the interaction control response region according to the current position information.
In this embodiment, an interaction position determination system is provided, which activates the interaction control response region by recognizing the gesture image within the target region and determines the target position corresponding to the interaction control response region according to the current position information of the gesture image. The target position of the interaction control response region is determined according to the current position of the recognized gesture image on the display screen of the smart terminal and changes correspondingly as the current position of the gesture image changes, which makes it convenient for the user to operate the interaction control response region and improves operating convenience.
In one embodiment, the activation module 520 is further configured to obtain current state information corresponding to the smart terminal, determine corresponding function information according to the current state information, and generate corresponding target interaction control response information according to the function information; and to activate, according to the gesture image, a target interaction control response region corresponding to the target interaction control response information.
As shown in FIG. 6, in one embodiment, the interaction position determination system further includes: a control module 540, configured to detect a gesture operation acting on the target position, trigger a corresponding interaction control instruction according to the gesture operation, and control the smart terminal to perform a corresponding operation according to the interaction control instruction.
As shown in FIG. 7, in one embodiment, the interaction position determination system 500 further includes:
a face recognition module 550, configured to recognize a face image within the target region and obtain first position information of the face image on the display screen of the smart terminal;
a processing module 560, configured to obtain the face image size corresponding to the face image and determine the distance between the current user and the display screen of the smart terminal according to the face image size;
a display size determination module 570, configured to obtain the preset distance range corresponding to the distance and determine the target display size of the interaction information according to the preset distance range; and
a display module 580, configured to determine the target display position of the interaction information according to the first position information and display the interaction information at the target display position at the target display size.
In one embodiment, the face recognition module 550 is further configured to obtain the number of face images recognized within the target region, and, when the number of face images is more than one, obtain the current position information corresponding to each face image.
The display module 580 is further configured to determine the target display position according to the current position information of each face image and a preset rule.
As shown in FIG. 8, a schematic diagram of the internal structure of the smart terminal in one embodiment is provided. The smart terminal includes a processor, a non-volatile storage medium, an internal memory, a network interface, a display screen, and an input device connected through a system bus. The non-volatile storage medium of the smart terminal can store an operating system and a computer program; when the computer program is executed, the smart terminal can be caused to perform an interaction position determination method. The processor of the smart terminal is configured to provide computing and control capabilities to support the operation of the entire smart terminal. The network interface is used for network communication with a server, such as sending recognized gesture images to the server and obtaining gesture image data stored on the server. The display screen of the smart terminal may be a liquid crystal display screen, an electronic ink display screen, or the like; the input device may be a touch layer covering the display screen, a key, trackball, or touchpad provided on the housing of the smart terminal, or an external keyboard, touchpad, mouse, or the like. The smart terminal may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
Those skilled in the art can understand that the structure shown in FIG. 8 is merely a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the smart terminal to which the solution of the present application is applied; a specific smart terminal may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
In one embodiment, the interaction position determination system provided in the present application may be implemented in the form of a computer program, and the computer program may run on the smart terminal shown in FIG. 8. The non-volatile storage medium of the smart terminal can store the program modules constituting the interaction position determination system, such as the gesture recognition module 510, the activation module 520, and the target position determination module 530 in FIG. 5. Each program module includes a computer program, and the computer program is used to cause the smart terminal to perform the steps of the interaction position determination methods of the embodiments of the present application described in this specification. The processor in the smart terminal can invoke the program modules of the interaction position determination system stored in the non-volatile storage medium of the smart terminal, run the corresponding programs, and implement the functions corresponding to the modules of the interaction position determination system described in this specification. For example, the smart terminal may recognize the gesture image within the target region and obtain the current position information corresponding to the gesture image on the display screen of the smart terminal through the gesture recognition module 510 in the interaction position determination system shown in FIG. 5, activate the interaction control response region according to the gesture image through the activation module 520 when it is detected that the gesture corresponding to the gesture image is the preset activation gesture, and determine the target position of the interaction control response region according to the current position information through the target position determination module 530.
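The program modules described above can be pictured as plain classes wired together by the terminal's main program, as in the sketch below. The class and method names are illustrative stand-ins, and the recognition internals are stubbed out; this is not the code of the claimed system.

```python
class GestureRecognitionModule:
    """Stub for module 510: recognizes a gesture and reports its screen position."""
    def recognize(self, frame):
        # A real implementation would run a detector on the camera frame;
        # here a fixed label and position are returned for illustration.
        return "raise_hand", (960, 540)

class ActivationModule:
    """Stub for module 520: activates the response region on a preset activation gesture."""
    PRESET_ACTIVATION_GESTURES = {"raise_hand", "swipe"}
    def activate(self, gesture_label):
        return gesture_label in self.PRESET_ACTIVATION_GESTURES

class TargetPositionModule:
    """Stub for module 530: places the response region at the gesture's current position."""
    def target_position(self, gesture_position):
        return gesture_position

# Wiring corresponding to steps S110 to S130 of the method.
gesture, position = GestureRecognitionModule().recognize(frame=None)
if ActivationModule().activate(gesture):
    print("Response region at", TargetPositionModule().target_position(position))
```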
A computer-readable storage medium is provided, on which a computer program is stored. When the computer program is executed by a processor, the processor is caused to perform the following steps: recognizing a gesture image within a target region and obtaining current position information corresponding to the gesture image on a display screen of a smart terminal; activating an interaction control response region according to the gesture image when it is detected that the gesture corresponding to the gesture image is a preset activation gesture; and determining a target position of the interaction control response region according to the current position information.
In this embodiment, a computer-readable storage medium is provided by means of which the interaction control response region is activated by recognizing the gesture image within the target region, and the target position corresponding to the interaction control response region is determined according to the current position information of the gesture image. The target position of the interaction control response region is determined according to the current position of the recognized gesture image on the display screen of the smart terminal and changes correspondingly as the current position of the gesture image changes, which makes it convenient for the user to operate the interaction control response region and improves operating convenience.
In one embodiment, when the computer program is executed by the processor, the processor is further caused to perform the following steps: obtaining current state information corresponding to the smart terminal, determining corresponding function information according to the current state information, and generating corresponding target interaction control response information according to the function information; and activating, according to the gesture image, a target interaction control response region corresponding to the target interaction control response information.
In one embodiment, when the computer program is executed by the processor, the processor is further caused to perform the following steps: detecting a gesture operation acting on the target position and triggering a corresponding interaction control instruction according to the gesture operation; and controlling the smart terminal to perform a corresponding operation according to the interaction control instruction.
In one embodiment, when the computer program is executed by the processor, the processor is further caused to perform the following steps: recognizing a face image within the target region and obtaining first position information of the face image on the display screen of the smart terminal; obtaining a face image size corresponding to the face image; determining the distance between the current user and the display screen of the smart terminal according to the face image size; obtaining a preset distance range corresponding to the distance and determining a target display size of the interaction information according to the preset distance range; and determining a target display position of the interaction information according to the first position information and displaying the interaction information at the target display position at the target display size.
In one embodiment, when the computer program is executed by the processor, the processor is further caused to perform the following steps: obtaining the number of face images recognized within the target region; when the number of face images is more than one, obtaining the current position information corresponding to each face image; and determining the target display position according to the current position information of each face image and a preset rule.
A smart terminal is provided, including one or more processors, a memory, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the programs are used to perform the following steps: recognizing a gesture image within a target region and obtaining current position information corresponding to the gesture image on a display screen of the smart terminal; activating an interaction control response region according to the gesture image when it is detected that the gesture corresponding to the gesture image is a preset activation gesture; and determining a target position of the interaction control response region according to the current position information.
In this embodiment, a smart terminal is provided, which activates the interaction control response region by recognizing the gesture image within the target region and determines the target position corresponding to the interaction control response region according to the current position information of the gesture image. The target position of the interaction control response region is determined according to the current position of the recognized gesture image on the display screen of the smart terminal and changes correspondingly as the current position of the gesture image changes, which makes it convenient for the user to operate the interaction control response region and improves operating convenience.
In one embodiment, the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the programs are further used to perform the following steps: obtaining current state information corresponding to the smart terminal, determining corresponding function information according to the current state information, and generating corresponding target interaction control response information according to the function information; and activating, according to the gesture image, a target interaction control response region corresponding to the target interaction control response information.
In one embodiment, the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the programs are further used to perform the following steps: detecting a gesture operation acting on the target position and triggering a corresponding interaction control instruction according to the gesture operation; and controlling the smart terminal to perform a corresponding operation according to the interaction control instruction.
In one embodiment, the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the programs are further used to perform the following steps: recognizing a face image within the target region and obtaining first position information of the face image on the display screen of the smart terminal; obtaining a face image size corresponding to the face image; determining the distance between the current user and the display screen of the smart terminal according to the face image size; obtaining a preset distance range corresponding to the distance and determining a target display size of the interaction information according to the preset distance range; and determining a target display position of the interaction information according to the first position information and displaying the interaction information at the target display position at the target display size.
In one embodiment, the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the programs are further used to perform the following steps: obtaining the number of face images recognized within the target region; when the number of face images is more than one, obtaining the current position information corresponding to each face image; and determining the target display position according to the current position information of each face image and a preset rule.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in the combinations of these technical features, they should all be regarded as falling within the scope of this specification.
The above embodiments represent only several implementations of the present invention, and their descriptions are relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be noted that those of ordinary skill in the art may make several modifications and improvements without departing from the concept of the present invention, all of which fall within the protection scope of the present invention. Therefore, the protection scope of the present invention patent shall be subject to the appended claims.

Claims (12)

  1. An interaction position determination method, the method comprising:
    recognizing a gesture image within a target region, and obtaining current position information corresponding to the gesture image on a display screen of a smart terminal;
    activating an interaction control response region according to the gesture image when it is detected that a gesture corresponding to the gesture image is a preset activation gesture; and
    determining a target position of the interaction control response region according to the current position information.
  2. The method according to claim 1, wherein the step of activating an interaction control response region according to the gesture image comprises:
    obtaining current state information corresponding to the smart terminal, determining corresponding function information according to the current state information, and generating corresponding target interaction control response information according to the function information; and
    activating, according to the gesture image, a target interaction control response region corresponding to the target interaction control response information.
  3. The method according to claim 1 or 2, wherein, after the step of determining a target position of the interaction control response region according to the current position information, the method comprises:
    detecting a gesture operation acting on the target position, and triggering a corresponding interaction control instruction according to the gesture operation; and
    controlling the smart terminal to perform a corresponding operation according to the interaction control instruction.
  4. The method according to claim 1, wherein the method further comprises:
    recognizing a face image within the target region, and obtaining first position information of the face image on the display screen of the smart terminal;
    obtaining a face image size corresponding to the face image;
    determining a distance between a current user and the display screen of the smart terminal according to the face image size;
    obtaining a preset distance range corresponding to the distance, and determining a target display size of interaction information according to the preset distance range; and
    determining a target display position of the interaction information according to the first position information, and displaying the interaction information at the target display position at the target display size.
  5. The method according to claim 4, wherein the step of determining a target display position of the interaction information according to the first position information comprises:
    obtaining the number of face images recognized within the target region;
    when the number of face images is more than one, obtaining current position information corresponding to each face image; and
    determining the target display position according to the current position information of each face image and a preset rule.
  6. An interaction position determination system, wherein the system comprises:
    a gesture recognition module, configured to recognize a gesture image within a target region and obtain current position information corresponding to the gesture image on a display screen of a smart terminal;
    an activation module, configured to activate an interaction control response region according to the gesture image when it is detected that a gesture corresponding to the gesture image is a preset activation gesture; and
    a target position determination module, configured to determine a target position of the interaction control response region according to the current position information.
  7. The system according to claim 6, wherein the activation module is further configured to obtain current state information corresponding to the smart terminal, determine corresponding function information according to the current state information, and generate corresponding target interaction control response information according to the function information; and to activate, according to the gesture image, a target interaction control response region corresponding to the target interaction control response information.
  8. The system according to claim 6 or 7, wherein the system further comprises:
    a control module, configured to detect a gesture operation acting on the target position, trigger a corresponding interaction control instruction according to the gesture operation, and control the smart terminal to perform a corresponding operation according to the interaction control instruction.
  9. The system according to claim 6, wherein the system further comprises:
    a face recognition module, configured to recognize a face image within the target region and obtain first position information of the face image on the display screen of the smart terminal;
    a processing module, configured to obtain a face image size corresponding to the face image and determine a distance between a current user and the display screen of the smart terminal according to the face image size;
    a display size determination module, configured to obtain a preset distance range corresponding to the distance and determine a target display size of interaction information according to the preset distance range; and
    a display module, configured to determine a target display position of the interaction information according to the first position information and display the interaction information at the target display position at the target display size.
  10. The system according to claim 9, wherein the face recognition module is further configured to obtain the number of face images recognized within the target region and, when the number of face images is more than one, obtain current position information corresponding to each face image; and
    the display module is further configured to determine the target display position according to the current position information of each face image and a preset rule.
  11. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the processor is caused to perform the steps of the method according to any one of claims 1 to 5.
  12. A smart terminal, comprising one or more processors, a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the programs are used to perform the steps of the method according to any one of claims 1 to 5.
PCT/CN2018/099219 2017-08-14 2018-08-07 Interaction position determination method and system, storage medium and smart terminal WO2019033957A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2020508377A JP2020530631A (ja) 2017-08-14 2018-08-07 Interaction position determination method, system, storage medium, and smart device
US16/791,737 US11163426B2 (en) 2017-08-14 2020-02-14 Interaction position determination method and system, storage medium and smart terminal

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710691663.6 2017-08-14
CN201710691663.6A CN107493495B (zh) 2017-08-14 2017-08-14 Interaction position determination method and system, storage medium and smart terminal

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/791,737 Continuation US11163426B2 (en) 2017-08-14 2020-02-14 Interaction position determination method and system, storage medium and smart terminal

Publications (1)

Publication Number Publication Date
WO2019033957A1 true WO2019033957A1 (zh) 2019-02-21

Family

ID=60645282

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/099219 WO2019033957A1 (zh) 2017-08-14 2018-08-07 Interaction position determination method and system, storage medium and smart terminal

Country Status (4)

Country Link
US (1) US11163426B2 (zh)
JP (1) JP2020530631A (zh)
CN (1) CN107493495B (zh)
WO (1) WO2019033957A1 (zh)


Also Published As

Publication number Publication date
US20200183556A1 (en) 2020-06-11
CN107493495B (zh) 2019-12-13
US11163426B2 (en) 2021-11-02
JP2020530631A (ja) 2020-10-22
CN107493495A (zh) 2017-12-19


Legal Events

Date Code Title Description
121  Ep: the epo has been informed by wipo that ep was designated in this application  (Ref document number: 18846027; Country of ref document: EP; Kind code of ref document: A1)
ENP  Entry into the national phase  (Ref document number: 2020508377; Country of ref document: JP; Kind code of ref document: A)
NENP  Non-entry into the national phase  (Ref country code: DE)
32PN  Ep: public notification in the ep bulletin as address of the adressee cannot be established  (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 24.09.2020))
122  Ep: pct application non-entry in european phase  (Ref document number: 18846027; Country of ref document: EP; Kind code of ref document: A1)