WO2024040861A1 - Procédé et système de commande d'autorisation d'opération, dispositif électronique et support de stockage - Google Patents

Procédé et système de commande d'autorisation d'opération, dispositif électronique et support de stockage Download PDF

Info

Publication number
WO2024040861A1
WO2024040861A1 (PCT/CN2023/071308)
Authority
WO
WIPO (PCT)
Prior art keywords
candidate
front face
face
target
coefficient
Prior art date
Application number
PCT/CN2023/071308
Other languages
English (en)
Chinese (zh)
Inventor
王淼军
杨瑞朋
曹阳
Original Assignee
湖北星纪魅族科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 湖北星纪魅族科技有限公司
Publication of WO2024040861A1 publication Critical patent/WO2024040861A1/fr

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012: Head tracking input arrangements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30: Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31: User authentication
    • G06F21/32: User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G06V40/166: Detection; Localisation; Normalisation using acquisition arrangements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G06V40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Definitions

  • This application relates to the field of intelligent control technology, for example, to operation authority control methods, control systems, electronic devices and storage media.
  • a set of external devices (for example, a mouse and keyboard set)
  • KVM (Keyboard Video Mouse)
  • This application provides an operation authority control method, control system, electronic device and storage medium.
  • an embodiment of the present application provides a method for controlling operation permissions, wherein the method includes:
  • the target device is determined from the candidate device set according to the front face coefficient of each facial image; wherein the candidate device set includes at least two candidate devices, the front face coefficient is used to determine the candidate device toward which the target object is oriented, and that candidate device is identified as the target device;
  • an embodiment of the present application provides an operation authority control device, wherein the device includes:
  • an image acquisition module configured to acquire the facial image of the target object collected by each image acquisition device;
  • a device determination module configured to determine the target device in the candidate device set according to the front face coefficient of each facial image; wherein the candidate device set includes at least two candidate devices, the front face coefficient is used to determine the candidate device toward which the target object is oriented, and that candidate device is determined as the target device;
  • a permission switching module configured to switch the operation target of the external device to the target device, so that the target device is operated and controlled through the external device.
  • an embodiment of the present application provides an operation authority control system, which includes: an external device, a controller, at least two image acquisition devices, and at least two candidate devices; wherein the external device is electrically connected to the controller; the controller is electrically connected to each candidate device and each image acquisition device through an expansion connection line; and the at least two candidate devices are in one-to-one correspondence with the at least two image acquisition devices;
  • the controller is configured to obtain the facial image of the target object collected by each image acquisition device, determine the target device among the candidate devices according to the front face coefficient of each facial image (the front face coefficient is used to determine the candidate device toward which the target object is oriented, and the candidate device facing the target object is determined as the target device), and switch the operation target of the external device to the target device, so that the target device is operated and controlled through the external device.
  • an electronic device which includes:
  • a memory communicatively connected to at least one processor; wherein,
  • the memory stores a computer program that can be executed by at least one processor, and the computer program is executed by at least one processor, so that at least one processor can execute an operation authority control method according to any embodiment of the present application.
  • embodiments of the present application provide a computer-readable storage medium.
  • the computer-readable storage medium stores computer instructions.
  • the computer instructions are configured to cause a processor, when executed, to implement the operation authority control method according to any embodiment of the present application.
  • Figure 1 is a flow chart of an operation authority control method provided according to an embodiment of the present application.
  • Figure 2 is a flow chart of an operation authority control method provided according to an embodiment of the present application.
  • Figure 3 is a flow chart of an operation authority control method provided according to an embodiment of the present application.
  • Figure 4 is a schematic diagram of human face orientation provided according to an embodiment of the present application.
  • Figure 5 is a schematic diagram of facial image segmentation provided according to an embodiment of the present application.
  • Figure 6 is a flow chart of a front face coefficient calculation method provided according to an embodiment of the present application.
  • Figure 7 is a schematic structural diagram of an operation authority control device provided according to an embodiment of the present application.
  • Figure 8 is a schematic structural diagram of an operation authority control system provided according to an embodiment of the present application.
  • Figure 9 is a schematic structural diagram of an operation authority control system provided according to an embodiment of the present application.
  • Figure 10 is an example diagram of an operation authority control system provided according to an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of an electronic device that implements an operation authority control method according to an embodiment of the present application.
  • the electronic device in the embodiment of the present application is a device with wireless transceiver function.
  • the electronic device can transmit data via a wireless local area network (WLAN) or a cellular network; over a cellular network, it communicates with the core network and exchanges voice and/or data with the radio access network (RAN).
  • the electronic device can be deployed on land, including indoors or outdoors, handheld or vehicle-mounted; it can also be deployed on water (such as ships, etc.); it can also be deployed in the air (such as aircraft, balloons, satellites, etc.).
  • the electronic device may be a mobile phone, a tablet computer (pad), a computer with wireless transceiver functions, a virtual reality (VR) terminal, an augmented reality (AR) terminal, an industrial control terminal, or a wireless terminal in self-driving, remote medical, smart grid, transportation safety, smart city, or smart home scenarios, etc.
  • the electronic device may include, for example, user equipment (UE), wireless terminal equipment, mobile terminal equipment, device-to-device (D2D) communication terminal equipment, vehicle-to-everything (V2X) terminal equipment, machine-to-machine/machine-type communications (M2M/MTC) terminal equipment, Internet of things (IoT) terminal equipment, a subscriber unit, a subscriber station, a mobile station, a remote station, an access point (AP), a remote terminal, customer premise equipment (CPE), fixed wireless access (FWA) equipment, an access terminal, a user terminal, a user agent, or a user device, etc.
  • this may include a mobile phone (or "cellular" phone), a computer with a mobile terminal device, or a portable, pocket-sized, handheld mobile device with a built-in computer, such as a personal communication service (PCS) phone, a session initiation protocol (SIP) phone, a wireless local loop (WLL) station, or a personal digital assistant (PDA).
  • constrained devices, such as devices with low power consumption, limited storage capability, or limited computing capability; for example, information sensing equipment such as barcode readers, radio frequency identification (RFID) devices, sensors, global positioning system (GPS) receivers, and laser scanners.
  • FIG. 1 is a flow chart of a method for controlling operation permissions provided according to an embodiment of the present application. This embodiment is applicable to automatically switching the operation permissions of external devices.
  • the method can be executed by an operation permission control device.
  • the operation authority control device can be implemented in the form of hardware and/or software, and the operation authority control device can be configured in an electronic device.
  • the electronic device may be a controller. As shown in Figure 1, the method includes:
  • the image acquisition device is used to acquire images of the target object.
  • the type of image acquisition device is not limited.
  • the image acquisition device may include but is not limited to equipment such as a camera, an infrared image sensor, or an infrared camera.
  • the image acquisition device can be installed in any position with an unobstructed view, as long as it can accurately capture the facial image of the target object.
  • the above image acquisition devices can be independent, such as independent cameras, infrared image sensors or infrared camera equipment, or they can be integrated into the candidate devices.
  • multiple image acquisition devices can be evenly arranged around the target object; it can be understood that the orientation of the target object differs in the facial image collected by each image acquisition device.
  • the facial image may refer to the facial image of the target object.
  • the facial image may include a front face image, a side face image, etc. of the target object.
  • an image of the target object's face is collected through an image acquisition device and used as the facial image of the target object.
  • each image acquisition device can actively report the collected facial image of the target object; alternatively, the controller can actively pull the facial image collected by each image acquisition device. This is not limited here.
  • when each image acquisition device actively reports the collected facial image, all devices can report to the controller at the same time, or report in sequence according to a preset reporting order.
  • when the controller actively pulls the facial images, it can obtain the facial image of the target object collected by each image collection device simultaneously, or acquire the collected facial images from each collecting device sequentially.
  • the preset reporting order may be the reporting order determined based on the serial number sorting of the image acquisition devices.
  • S120: Determine the target device in the candidate device set according to the front face coefficient of each facial image; wherein the candidate device set includes at least two candidate devices, the front face coefficient is used to determine the candidate device toward which the target object is oriented, and that candidate device is determined as the target device.
  • the frontal face coefficient is used to characterize the frontal orientation of the target object's face relative to each candidate device.
  • the frontal face coefficient may be determined based on the area of the left face area and the area of the right face area in each facial image.
  • the frontal face coefficient can be calculated as the ratio of the left face area to the right face area, minus 1. The smaller the value of the frontal face coefficient, the more directly the face is oriented toward the candidate device; conversely, the larger the value, the less directly the face is oriented toward it.
  • the candidate device set may refer to a set of all candidate devices connected to a controller; the candidate devices may include computers or servers and other electronic devices with functions such as data storage and data calculation. In actual operation, there is no limit on the number of candidate devices, as long as the number of candidate devices connected to each controller is at least two. That is, the candidate device set includes at least two candidate devices.
  • the target device refers to the device in the candidate device set that has the highest degree of frontal orientation to the target object. It can be understood that based on the frontal coefficient of each facial image, all candidate devices in the candidate device set are screened to obtain the candidate device with the highest degree of frontal orientation to the target object as the target device. In an embodiment, there may be at least two candidate devices included in the candidate device set.
  • the candidate devices and the image acquisition devices may have a one-to-one correspondence, that is to say, the number of candidate devices is the same as the number of image acquisition devices, and each candidate device may correspond to an image acquisition device.
  • the facial images of the target object collected by the image acquisition device corresponding to each candidate device in the candidate device set are different.
  • the front face coefficients corresponding to different facial images can differ.
  • the front face coefficients corresponding to the candidate devices can be compared to determine the target device; it can be understood that each front face coefficient is calculated from the facial image corresponding to a candidate device.
  • the target device can thus be determined in the candidate device set. For example, the candidate device corresponding to the facial image with the smallest front face coefficient value may be determined as the target device.
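The selection step described above can be sketched in a few lines. This is an illustrative sketch only, not code from the application; the function name, the use of a dict keyed by device id, and the example coefficient values are all assumptions.

```python
def select_target_device(front_face_coefficients):
    """Pick the candidate whose facial image has the smallest front face
    coefficient, i.e. the candidate device the target object is most
    directly facing.

    front_face_coefficients: dict mapping candidate device id -> coefficient
    (a hypothetical representation; the application does not specify one).
    """
    if not front_face_coefficients:
        raise ValueError("candidate device set must not be empty")
    # The smaller the coefficient, the more frontal the face orientation.
    return min(front_face_coefficients, key=front_face_coefficients.get)

# Example: device "B" has the smallest coefficient, so it becomes the target.
coeffs = {"A": 0.42, "B": 0.05, "C": 0.31}
target = select_target_device(coeffs)
```
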
  • the operation target may refer to the target over which the external device, as directed by the controller, has control authority.
  • the external device can control the target device.
  • the controller can switch the operation target to the target device, so that the external device operates and controls the target device.
  • in this embodiment, the facial image of the target object is collected through the image acquisition devices, and the corresponding frontal face coefficient is determined from each collected facial image.
  • the candidate devices can be screened according to the frontal face coefficients to obtain the target device, and the operation target of the external device is switched to the target device so that the target device is operated and controlled through the external device. The operating permissions of the external device are thus switched automatically according to the target object's facial orientation, improving the convenience of switching the operating permissions and the user experience.
  • the method further includes: obtaining historical cursor data of the target device; and determining, according to the historical cursor data, the cursor start position on the display screen corresponding to the target device.
  • the historical cursor data may refer to the position where the cursor of the external device stayed on the target device when the target device last obtained the operation permission of the external device.
  • historical cursor data may be stored in the controller.
  • the cursor starting position may refer to the initial stay position of the cursor of the external device on the screen, and the cursor starting position may stay at any position on the screen.
  • the controller can extract the locally stored historical cursor data of the target device and, according to it, load the cursor start position on the display screen corresponding to the target device. After extracting the historical cursor data, the controller can read the cursor position recorded when the target device last obtained the operating permission of the external device, and set the starting position of the cursor displayed on the target device to the position where the cursor stayed when the operation permission was last obtained.
  • the method further includes: initializing the cursor starting position of the external device to the center position of the display screen corresponding to the target device.
  • when the historical cursor data of the target device does not exist locally in the controller, the cursor start position of the external device can be initialized to a default position.
  • the default position can be preset; in actual operation, it can be set to the center of the screen. That is to say, when no historical cursor data of the target device exists locally in the controller, the cursor starting position of the external device is initialized so that the cursor automatically moves to the center of the display screen corresponding to the target device.
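The restore-or-initialize behavior above can be sketched as follows. This is a hypothetical illustration: the history dict, function name, and coordinate convention are assumptions, not part of the application.

```python
def cursor_start_position(history, device_id, screen_size):
    """Return the cursor start position for the given target device.

    history: dict of device id -> (x, y), the position where the cursor
             stayed when that device last held the operation permission.
    screen_size: (width, height) of the display attached to the device.
    If no historical cursor data exists, initialize to the screen center.
    """
    if device_id in history:
        return history[device_id]          # restore the last cursor position
    w, h = screen_size
    return (w // 2, h // 2)                # default: center of the display

history = {"A": (120, 560)}                # toy history for device "A" only
restored = cursor_start_position(history, "A", (1920, 1080))
initialized = cursor_start_position(history, "B", (1920, 1080))
```
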
  • the method further includes: sending a turn-on instruction to the status indicator light of the target device to light the status indicator light.
  • the status indicator light can be used to indicate whether the target device has obtained operation permission.
  • the status indicator light can display different states according to different control states of the target device. For example, when the target device obtains the control permission of the external device, the status indicator light of the target device may be in the on state; when the target device does not obtain the control permission of the external device, the indicator light of the target device may be in the off state.
  • the status indicator lights and the candidate devices can have a one-to-one correspondence; that is to say, each candidate device is connected to a status indicator light, and when a candidate device is determined as the target device, its corresponding status indicator light changes its display state accordingly.
  • after the controller determines the target device based on the front face coefficient of each facial image and the candidate device set, it can control the status indicator light corresponding to the target device to change its display state.
  • the controller can send a turn-on command to the status indicator light of the target device, and the status indicator light adjusts the display state to a lit state according to the turn-on command sent by the controller.
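The indicator behavior can be sketched as below, with a caller-supplied `send_instruction` callable standing in for however the controller actually signals each light; all names here are illustrative assumptions.

```python
def update_status_indicators(candidate_ids, target_id, send_instruction):
    """Turn on the target device's status indicator and turn off the rest.

    send_instruction(device_id, command): a stand-in for the controller's
    actual signaling path to each status indicator light.
    """
    states = {}
    for device_id in candidate_ids:
        command = "on" if device_id == target_id else "off"
        send_instruction(device_id, command)   # e.g. the turn-on instruction
        states[device_id] = command
    return states

sent = []
states = update_status_indicators(["A", "B"], "B",
                                  lambda d, c: sent.append((d, c)))
```
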
  • FIG. 2 is a flow chart of a method for controlling operation permissions provided according to an embodiment of the present application. This embodiment describes a method for controlling operation permissions based on the above embodiment. As shown in Figure 2, the method includes:
  • the image acquisition instruction may be used to instruct the image acquisition device to start acquiring images.
  • the image acquisition instruction may be in the form of a binary number, so that the image acquisition device performs the action of acquiring a facial image of the target object.
  • the time when the image acquisition instruction is sent can be the time when the system is started. It can be understood that at the same time as the system is started, the controller can send the image acquisition instruction to each image acquisition device.
  • the controller may send image acquisition instructions to each image acquisition device at the same time; or, the controller may send image acquisition instructions to each image acquisition device in sequence.
  • the controller may send an image collection instruction to each image collection device immediately after the system is started, so that each image collection device collects facial images from the target object.
  • S220 Use a timed polling method to sequentially acquire the facial images of the target object collected by each image acquisition device.
  • timed polling may refer to querying each device in sequence at a preset time interval; "timed" can be understood as polling at a fixed interval, and the preset duration can be pre-configured in the controller by the manufacturer based on experience.
  • the controller may poll each image collection device every preset period of time to obtain the facial image of the target object captured by each image collection device.
  • the controller can sequentially acquire the facial images of the target object collected by each image acquisition device.
  • the acquisition method can be a timed polling method, and the polling can proceed in the preset sequence of the image acquisition devices. That is to say, the controller can periodically poll each image acquisition device in the preset sequence and sequentially acquire the facial images of the target object collected by each device.
  • the preset time period is not limited and can be set by the manufacturer based on experience; to ensure response speed, the preset duration can be on the order of milliseconds.
  • the preset time length stored by the controller may include but is not limited to 5 milliseconds, 10 milliseconds, or 100 milliseconds, etc., and the preset time length may also be a time range.
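A minimal sketch of one round of timed polling, assuming each acquisition device is represented as a dict with an `id` and a `capture` callable (both hypothetical stand-ins for the controller's real device interface):

```python
import time

def poll_facial_images(acquisition_devices, interval_s=0.01):
    """Timed polling: query each image acquisition device once, in its
    preset order, waiting `interval_s` between polls (e.g. 0.01 for 10 ms).
    """
    images = {}
    for device in acquisition_devices:
        images[device["id"]] = device["capture"]()  # pull the facial image
        time.sleep(interval_s)                      # fixed polling interval
    return images

# Toy devices whose "capture" just returns a placeholder image label.
devices = [{"id": "cam1", "capture": lambda: "img1"},
           {"id": "cam2", "capture": lambda: "img2"}]
images = poll_facial_images(devices, interval_s=0)
```

In a real controller this loop would repeat continuously, recomputing the front face coefficients after every round.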
  • the front face coefficient can be calculated correspondingly for each facial image, and the front face coefficient can be determined based on the area of the left face area and the area of the right face area.
  • the calculation method of the left face area and right face area values is not limited here.
  • the pixels of the left face area and the right face area can be used for quantification: the number of pixels in each area is counted and used as the quantified value of that area.
  • the front face coefficient can be calculated by dividing the area of the left face area by the area of the right face area, subtracting 1, and taking the absolute value of the result, which ensures that the front face coefficient is non-negative.
  • the front face coefficients can be summarized into a set of front face coefficients.
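The coefficient formula described above, |left/right - 1|, can be written directly; the function name and example areas are illustrative, not from the application.

```python
def front_face_coefficient(left_area, right_area):
    """|left/right - 1|: 0 means the two half-face areas are equal, i.e.
    the face is fully frontal; larger values mean a more turned face.

    left_area, right_area: pixel counts of the left and right face areas.
    """
    if right_area <= 0:
        raise ValueError("right face area must be a positive pixel count")
    return abs(left_area / right_area - 1)

equal_halves = front_face_coefficient(100, 100)   # fully frontal face
turned_face = front_face_coefficient(150, 100)    # face turned to one side
```
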
  • S230 includes S2301-S2305:
  • the initial face area may refer to the face area extracted from the facial image of the target object according to the facial contour. It is a closed-loop area of the face boundary and may include various organs of the face. In one embodiment, the initial face area may be obtained by filtering out the background area based on the facial image.
  • the facial image may include an initial face area and a background area excluding the initial face area
  • the controller may identify the initial face area, form a facial outline, and extract the initial face area based on the facial outline.
  • S2302. Determine the dividing line between the left and right faces based on the pre-obtained eye distance center position and mouth center position.
  • the distance between the eyes refers to the distance between the center points of the two eyes.
  • the center position of the eye distance can refer to the midpoint between the center point of the left eye and the center point of the right eye; the center position of the mouth can refer to the midpoint between the left corner of the mouth and the right corner of the mouth.
  • a two-dimensional coordinate system can be established on the facial image, and the eye distance center position and mouth center position are determined according to the coordinate positions of the coordinate system.
  • the order in which the eye distance center position and the mouth center position are determined is not limited.
  • for example, a two-dimensional coordinate system can be established with the bottom edge of the facial image as the horizontal axis and the left side of the facial image as the vertical axis, and the center position of the eye distance and the center position of the mouth can be confirmed within it.
  • the left-right face dividing line may refer to the boundary between the left and right halves of the face, and may coincide with the central axis of the face. The method of confirming the dividing line is not limited.
  • the center position of the eye distance can be determined by the center position of the left eye and the center position of the right eye, and the center position of the mouth can be confirmed by the position of the left corner of the mouth and the position of the right corner of the mouth.
  • the controller can establish a two-dimensional coordinate system on the facial image to obtain the center position of the left eye, the center position of the right eye, the position of the left mouth corner and the position of the right mouth corner; the midpoint between the two eye center positions is taken as the eye distance center position, and the midpoint between the left and right mouth corner positions as the mouth center position.
  • the center position of the distance between the eyes and the center position of the mouth can be connected and an extension line can be made to determine the dividing line between the left and right faces.
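These two midpoints, and the line through them, can be computed as below. The function name and the example landmark coordinates are illustrative assumptions; coordinates follow the image coordinate system described above.

```python
def left_right_dividing_line(left_eye, right_eye, left_mouth, right_mouth):
    """Return two points defining the left-right face dividing line: the
    line through the midpoint of the two eye centers (the eye distance
    center) and the midpoint of the two mouth corners (the mouth center).
    Each argument is an (x, y) landmark position.
    """
    eye_center = ((left_eye[0] + right_eye[0]) / 2,
                  (left_eye[1] + right_eye[1]) / 2)
    mouth_center = ((left_mouth[0] + right_mouth[0]) / 2,
                    (left_mouth[1] + right_mouth[1]) / 2)
    # Extending the segment eye_center -> mouth_center gives the full line.
    return eye_center, mouth_center

eye_c, mouth_c = left_right_dividing_line((40, 60), (80, 60),
                                          (50, 110), (74, 110))
```
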
  • the left face area may refer to the left portion of the initial face area after segmentation along the left-right face dividing line, and the right face area may refer to the right portion.
  • by segmenting the initial face area along the dividing line, the corresponding left face area and right face area can be obtained.
  • S2303 includes: determining the first collection boundary line of the initial face area based on the center positions of the two eyes; determining the second collection boundary line of the initial face area based on the positions of the two mouth corners; intercepting the effective face area in the initial face area based on the first and second collection boundary lines; and segmenting the effective face area along the left-right face dividing line to obtain the corresponding left face area and right face area.
  • the first collection boundary line may refer to the first boundary line of the effective face area. In an embodiment, the first collection boundary line may be determined based on the center positions of both eyes.
  • the second collection boundary line may refer to the second boundary line of the effective face area. In the embodiment, the second collection boundary line may be determined based on the positions of the two corners of the mouth.
  • the effective face area may be the actual detection area of the left face area and the right face area. In actual operations, the effective face area may be intercepted based on the first collection boundary line and the second collection boundary line.
  • the effective face area can be set by the manufacturer based on experience, and can include the face area below the center of the two eyes and above the corners of the mouth.
  • the collection boundary line may be preset to intercept the initial face area, and the preset collection boundary line may include a first collection boundary line and a second collection boundary line.
  • the first collection boundary line can be the upper boundary line that intercepts the effective face area, which can be determined based on the center positions of the two eyes.
  • the center point of the left eye and the center point of the right eye can be connected and extended as the first collection boundary line;
  • the second collection boundary line can be the lower boundary line that intercepts the effective face area, which can be determined based on the positions of the two corners of the mouth.
  • the left mouth corner point and the right mouth corner point can be connected and extended as the second collection boundary line.
  • the effective face area is intercepted in the initial face area according to the first collection boundary line and the second collection boundary line.
  • the effective face area can be divided according to the dividing line between left and right faces.
  • the dividing line between the left and right faces can be generated by connecting the center point between the center point of the left eye and the center point of the right eye and the center point between the left corner point of the mouth and the right corner point of the mouth.
  • the effective face area is segmented according to the left and right face dividing line to obtain the corresponding left face area and right face area.
  • the method of confirming the area of the left face area and the area of the right face area may not be limited.
  • the area of the left face area and the area of the right face area can be quantified based on the pixels of the left face area and the right face area.
  • the number of pixels in the left face area and the right face area can be calculated respectively, and the number of pixels in the area is used as the quantified value of the left face area and the right face area.
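  • the pixel-count quantification described above can be sketched as follows (a minimal Python sketch; the binary masks and the `quantify_area` helper are illustrative assumptions, not part of the embodiment):

```python
def quantify_area(mask):
    """Count the foreground (1) pixels of a segmented face region mask."""
    return sum(sum(row) for row in mask)

# Toy 4x4 binary masks standing in for the segmented left/right face regions.
left_mask = [[1, 1, 0, 0],
             [1, 1, 0, 0],
             [1, 1, 0, 0],
             [1, 0, 0, 0]]
right_mask = [[0, 0, 1, 1],
              [0, 0, 1, 1],
              [0, 0, 1, 1],
              [0, 0, 0, 1]]

left_area = quantify_area(left_mask)    # 7 pixels
right_area = quantify_area(right_mask)  # 7 pixels
```

  • here, the pixel counts of the two masks serve directly as the quantified values of the left face area and the right face area.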
  • the front face coefficient of the corresponding facial image can be determined based on the area of the left face area and the area of the right face area.
  • the calculation of the front face coefficient may include taking the ratio of the area of the left face area to the area of the right face area, subtracting 1, and taking the absolute value of the result, so that the front face coefficient is a non-negative number; in this way, the front face coefficient of each facial image can be determined.
  • the corresponding front face coefficient can be calculated for each facial image, and the front face coefficients of the facial images can be summarized into a corresponding front face coefficient set.
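  • the coefficient calculation described above (ratio of left area to right area, minus 1, absolute value) can be sketched as follows (the function name, the `eps` guard and the sample area pairs are hypothetical):

```python
def front_face_coefficient(left_area, right_area, eps=1e-9):
    """|left/right - 1|: close to 0 for a frontal face, larger as the head turns."""
    return abs(left_area / max(right_area, eps) - 1)

# One coefficient per facial image; together they form the front face coefficient set.
areas_per_image = [(500, 498), (620, 410), (305, 300)]
coefficient_set = [front_face_coefficient(l, r) for l, r in areas_per_image]
```

  • equal left and right areas give a coefficient of 0; the more the face turns away from the camera, the larger the coefficient.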
  • S240 Filter the set of frontal face coefficients according to the preconfigured frontal face coefficient threshold value to obtain a corresponding set of effective frontal face coefficients.
  • the frontal face coefficient threshold value may refer to the critical value for distinguishing effective frontal face coefficients.
  • the value of the front face coefficient threshold is not limited. It can be set by the manufacturer based on experience, as long as it can distinguish whether a front face coefficient is valid.
  • the set of valid frontal face coefficients may refer to a set of valid frontal face coefficients filtered according to the frontal face coefficient threshold.
  • the effective frontal face coefficient may refer to the frontal face coefficient when there is a valid face area in the facial image. In the embodiment, when there is a human face in the facial image, the corresponding frontal face coefficient can be considered to be valid; when there is no human face in the facial image, the corresponding frontal face coefficient can be considered to be invalid.
  • the front face coefficient threshold value may be pre-configured for filtering valid front face coefficients, and the value of the front face coefficient threshold value may not be limited.
  • the front face coefficient threshold value may include 0, -1, etc. It can be set so that a front face coefficient greater than the threshold value is an effective front face coefficient. The effective front face coefficients are screened from the front face coefficient set according to the preconfigured threshold value: the front face coefficients in the set are compared one by one with the threshold value, and a coefficient greater than the threshold value is considered valid. After all effective front face coefficients are obtained, they can be summarized into an effective front face coefficient set.
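  • the one-by-one threshold comparison can be sketched as follows (the `NO_FACE` sentinel for images without a face and the default threshold are assumptions for illustration):

```python
NO_FACE = -1.0  # hypothetical sentinel coefficient assigned when no face is detected

def filter_valid(coefficient_set, threshold=NO_FACE):
    """Keep only the front face coefficients strictly greater than the threshold."""
    return [c for c in coefficient_set if c > threshold]

coefficient_set = [0.02, NO_FACE, 0.85, NO_FACE, 0.31]
valid_set = filter_valid(coefficient_set)  # [0.02, 0.85, 0.31]
```

  • images in which no face was detected are thereby excluded before the target device is chosen.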
  • the front face coefficient can correspond to the image acquisition device, and the effective image acquisition device can be filtered according to the set of effective front face coefficients.
  • the image acquisition devices and the candidate devices can be in one-to-one correspondence, so the candidate device set can be screened according to the effective image acquisition devices to obtain a corresponding effective candidate device set, and the target device can be determined in the effective candidate device set based on the effective front face coefficient set.
  • the effective candidate device corresponding to the effective front face coefficient with the smallest value may be preset as the target device.
  • the values of the effective front face coefficients in the effective front face coefficient set can be compared to obtain the effective front face coefficient with the smallest value, and the candidate device corresponding to that coefficient is determined as the target device.
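  • the smallest-coefficient selection can be sketched as follows (the device identifiers and coefficient values are hypothetical):

```python
def select_target_device(valid_coefficients):
    """Return the device whose effective front face coefficient is smallest,
    i.e. whose camera sees the most frontal face."""
    return min(valid_coefficients, key=valid_coefficients.get)

# device identifier -> effective front face coefficient
valid_coefficients = {"pc-1": 0.42, "pc-3": 0.05, "pc-4": 0.77}
target_device = select_target_device(valid_coefficients)  # "pc-3"
```

  • the device the user is facing most directly yields the coefficient closest to 0 and therefore wins the control rights.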
  • determining the target device in the candidate device set according to the effective front face coefficient set includes: determining the corresponding effective image acquisition device set according to the effective front face coefficients; screening the candidate device set according to the expansion connection line corresponding to each effective image acquisition device in the effective image acquisition device set to obtain a corresponding effective device set; and determining the corresponding target device according to the front face coefficient corresponding to each device in the effective device set.
  • the type of the expansion connection cable is not limited, as long as it can enable the controller to control and expand the candidate device.
  • the expansion connection cable can include a Universal Serial Bus (USB) cable, etc.
  • the corresponding set of effective image acquisition devices can be determined according to the set of effective frontal face coefficients.
  • the effective frontal face coefficients and the facial images collected by the image acquisition device can be in one-to-one correspondence.
  • the effective image acquisition device can be determined.
  • the effective image acquisition devices can be summarized into a set of effective image acquisition devices.
  • the image acquisition device and the candidate device may have a one-to-one correspondence.
  • the image acquisition device and the candidate device may be connected through an expansion connection line.
  • the candidate device set may be filtered to obtain the corresponding effective device set.
  • each device in the effective device set can correspond to an effective front face coefficient; by comparing the values of the effective front face coefficients, the effective front face coefficient with the smallest value can be obtained, and the effective candidate device corresponding to that coefficient is determined as the target device.
  • S260 Automatically switch the operation permission of the external device to the target device, so as to operate and control the target device through the external device.
  • the front face coefficient set is filtered with the preconfigured front face coefficient threshold value to obtain the effective front face coefficients, which eliminates competition for control rights from invalid candidate devices; the target device is then determined based on the effective front face coefficient set and the candidate device set, so that the target device can be judged more accurately, improving the accuracy of operation permission control and the user experience.
  • FIG. 3 is a flow chart of a method for controlling operation permissions provided according to an embodiment of the present application. This embodiment is based on the above embodiment, taking the image acquisition device as a camera as an example to illustrate the method of controlling operation authority. As shown in Figure 3, the method includes:
  • Each camera collects facial images of the target object.
  • the controller uses scheduled task polling (with millisecond-level scheduled tasks to ensure real-time response speed) to control each expansion camera to collect the facial image of the target object.
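  • the millisecond-level scheduled polling can be sketched as follows (the `FakeCamera` class and its `capture()` method are hypothetical stand-ins for a real camera interface):

```python
import time

class FakeCamera:
    """Stand-in for a real expansion camera; capture() is a hypothetical API."""
    def __init__(self, name):
        self.name = name

    def capture(self):
        return f"frame-from-{self.name}"

def poll_cameras(cameras, interval_ms=50, rounds=3):
    """Poll every camera once per round at a millisecond-level interval."""
    collected = []
    for _ in range(rounds):
        # One frame from each expansion camera per polling round.
        collected.append([cam.capture() for cam in cameras])
        time.sleep(interval_ms / 1000.0)
    return collected

frames = poll_cameras([FakeCamera("cam1"), FakeCamera("cam2")],
                      interval_ms=1, rounds=2)
```

  • in a real controller, the loop would run continuously and hand each round of frames to the face-coefficient calculation.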
  • each camera transmits the collected user image to the controller. Based on the facial image of the target object collected by each camera, the controller first identifies the initial face information in the facial image and then calculates the front face coefficient.
  • after the front face coefficients of the facial images collected by all cameras are calculated, a candidate device whose camera yields an invalid front face coefficient does not participate in the competition for control rights; conversely, a candidate device whose camera yields a valid front face coefficient can be used as an effective candidate device. That is, whether a candidate device is an effective candidate device can be determined based on the front face coefficient calculated for its collected facial image.
  • S340 Determine whether the valid candidate device is the target device. If the valid candidate device is the target device, proceed to S370; if the valid candidate device is not the target device, proceed to S350.
  • the status indicator lights corresponding to the candidate devices that have not obtained control rights are turned off.
  • the corresponding indicator light is set to a non-focus state, that is, an off state (prompts the user that the computer has lost control authority).
  • the candidate device status indicator lights up.
  • the status indicator light corresponding to the candidate device that has obtained control rights is turned on and set to the focus state, that is, the lit state (prompting the user which computer has obtained control authority).
  • S380 Determine whether historical cursor information exists. If historical cursor information exists, proceed to S390; if historical cursor information does not exist, initialize the cursor information (for example, initialize the cursor to the middle of the screen).
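  • the S380 decision (load historical cursor information if it exists, otherwise initialize the cursor to the screen center) can be sketched as follows (function and parameter names are illustrative):

```python
def initial_cursor_position(history, screen_size):
    """Return the stored historical cursor position if one exists; otherwise
    initialize the cursor to the middle of the target device's screen."""
    if history is not None:
        return history
    width, height = screen_size
    return (width // 2, height // 2)

restored = initial_cursor_position((120, 300), (1920, 1080))  # history exists
defaulted = initial_cursor_position(None, (1920, 1080))       # no history
```

  • storing the cursor position when a device loses focus and reloading it when focus returns keeps the pointer where the user left it.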
  • FIG. 4 is a schematic diagram of a person's face orientation according to an embodiment of the present application.
  • when the facial image of the target object collected by the image acquisition device is any of the facial images shown in Figure 4, the calculated front face coefficient is an effective front face coefficient.
  • the face orientation may include but is not limited to the five face orientations in Figure 4, as long as the effective face area can be identified.
  • the calculation principle of the front face coefficient is as follows: when a person's face is observed from different angles in the same horizontal plane, the areas of the left and right halves of the face are approximately equal when viewed from the front; when viewed from the side, the areas of the left and right halves differ, and as the deviation angle (relative to the frontal angle) increases, the difference in area between the left and right halves becomes more and more obvious. Based on this principle, in the polling timing task, each image acquisition device collects a different head image of the user at the same moment, and a different front face coefficient is calculated for each.
  • FIG. 5 is a schematic diagram of face image segmentation provided according to an embodiment of the present application. As shown in Figure 5, the face image can be segmented into the following areas:
  • the face information in the image is first extracted, the face boundary line 110 is identified, and a closed-loop area of the face boundary is formed.
  • the positions of the left eye 101 and the right eye 102 in the face information are obtained, the left eye center point 103 and the right eye center point 104 are confirmed, and the left eye center point 103 and the right eye center point 104 are connected and extended; this extension line is used as the first collection boundary line.
  • the center point between the left eye center point 103 and the right eye center point 104 is taken as the eye distance center position point 105, and the center point between the left mouth corner point 106 and the right mouth corner point 107 is taken as the mouth left-right center position point 108; connecting the eye distance center position point 105 and the mouth left-right center position point 108 gives the left and right face dividing line 109, which can pass through the nose 113.
  • the effective face area is determined according to the face boundary line 110, the first collection boundary line 117 and the second collection boundary line 118.
  • the face boundary line between the points where the first collection boundary line 117 and the second collection boundary line 118 respectively intersect the left side of the face boundary line 110 is regarded as the left face boundary line 111, and the face boundary line between the points where the first collection boundary line 117 and the second collection boundary line 118 respectively intersect the right side of the face boundary line 110 is regarded as the right face boundary line 112.
  • the area surrounded by the first collection boundary line 117, the second collection boundary line 118, the left face boundary line 111 and the left and right face dividing line 109 is the left face area 115; the area surrounded by the first collection boundary line 117, the second collection boundary line 118, the right face boundary line 112 and the left and right face dividing line 109 is the right face area 114.
  • FIG. 6 is a flow chart of a front face coefficient calculation method provided according to an embodiment of the present application. This embodiment explains the calculation process of the front face coefficient based on the above embodiment. As shown in Figure 6, the method includes:
  • S4020 Identify whether there is a face in the collected image. If there is a face, go to S4030; if there is no face, go to S4130.
  • the center positions of the two eyes are identified in the closed-loop region of the face boundary. For example, in a two-dimensional plane, assume that the center position coordinates of the left eye and the right eye are (x_el, y_el) and (x_er, y_er) respectively.
  • the left and right mouth corners are identified in the closed-loop region of the face boundary. For example, in a two-dimensional plane, assume that the coordinates of the left mouth corner and the right mouth corner are (x_ml, y_ml) and (x_mr, y_mr) respectively.
  • the calculated eye distance center point (x_em, y_em) and the calculated mouth corner center point (x_mm, y_mm) can be connected as the dividing line between the left face and the right face.
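  • the construction of the dividing line from the two center points can be sketched as follows (the landmark coordinates are hypothetical example values):

```python
def midpoint(p, q):
    """Midpoint of two 2-D landmark points."""
    return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

# Hypothetical landmark coordinates in image space.
left_eye, right_eye = (40.0, 50.0), (80.0, 50.0)
left_mouth, right_mouth = (50.0, 90.0), (70.0, 90.0)

eye_center = midpoint(left_eye, right_eye)        # (x_em, y_em)
mouth_center = midpoint(left_mouth, right_mouth)  # (x_mm, y_mm)
# The segment from eye_center to mouth_center is the left/right face dividing line.
```

  • the two midpoints fully determine the dividing line, so no other landmarks are needed for the left/right split.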
  • the face area can be divided in the vertical direction: the left eye center point and the right eye center point of the face are connected and extended as the upper boundary line of face collection (that is, the first collection boundary line), and the left mouth corner point and the right mouth corner point of the face are connected and extended as the lower boundary line of face collection (that is, the second collection boundary line).
  • the effective face area can be denoted region_face.
  • the effective face area region_face can be divided into the left face area region_l-face and the right face area region_r-face according to the left and right face dividing line.
  • the method of calculating the area of the left and right faces is not limited.
  • the values of the area of the left face area and the area of the right face area can be quantified using pixel points: the number of pixels in each area is counted, and the pixel counts of the left face area and the right face area are used as the quantified values of the corresponding areas.
  • the front face coefficient can be denoted δ.
  • when the area of the right face area is 0, the denominator can be replaced with a small positive number infinitely close to 0 to ensure that the calculation can proceed normally.
  • the calculation formula of the front face coefficient δ is as follows: δ = |area(region_l-face) / area(region_r-face) − 1|.
  • FIG. 7 is a schematic structural diagram of an operation authority control device provided according to an embodiment of the present application.
  • the device includes: an image acquisition module 71, a device determination module 72, and a permission switching module 73.
  • the image acquisition module 71 is configured to acquire the facial image of the target object collected by each image acquisition device.
  • the device determination module 72 is configured to determine the target device in the candidate device set according to the front face coefficient of each facial image, wherein the candidate device set includes at least two candidate devices, the front face coefficient is used to determine the candidate device toward which the target object is oriented, and the candidate device toward which the target object is oriented is determined as the target device.
  • the permission switching module 73 is configured to switch the operation authority of the external device to the target device, so as to operate and control the target device through the external device.
  • the target object's facial image is collected through the image acquisition module, the corresponding front face coefficient is confirmed for the collected facial image by the device determination module, the candidate devices are screened to obtain the corresponding target device, and the permission switching module automatically switches the operating permissions of the external device to the target device so that the target device can be controlled through the external device. Automatically switching the operating permissions of the external device according to the target object's facial orientation improves the convenience of switching the operating permissions of the external device and improves the user experience.
  • the image acquisition module 71 includes:
  • the instruction sending unit is configured to send an image acquisition instruction to each image acquisition device, so that each image acquisition device collects facial images of the target object.
  • the image acquisition unit is configured to sequentially acquire the facial images of the target object collected by each image acquisition device in a timed polling manner.
  • the device determination module 72 includes:
  • the front face coefficient acquisition unit is configured to determine the front face coefficient of each facial image and obtain a set of front face coefficients.
  • the front face coefficient screening unit is configured to filter the front face coefficient set according to the preconfigured front face coefficient threshold value to obtain the corresponding effective front face coefficient set.
  • the device confirmation unit is configured to determine the target device in the candidate device set according to the effective front face coefficient set.
  • the front face coefficient acquisition unit includes:
  • a face recognition unit configured to recognize and extract the initial face region in each facial image.
  • the dividing line confirmation unit is configured to determine the dividing line between the left and right faces based on the pre-obtained eye distance center position and mouth center position.
  • the area segmentation unit is configured to segment the initial face area according to the dividing line between the left and right faces to obtain the corresponding left face area and right face area.
  • the front face coefficient confirmation unit is configured to determine the front face coefficient of the corresponding facial image based on the area of the left face area and the area of the right face area.
  • the set creation unit is configured to form a corresponding front face coefficient set from the front face coefficients of each facial image.
  • the area segmentation unit includes:
  • the first boundary line collection unit is configured to determine the first collection boundary line of the initial face area based on the center positions of the two eyes.
  • the second boundary line collection unit is configured to determine the second collection boundary line of the initial face area based on the positions of the two corners of the mouth.
  • the area interception unit is configured to intercept the effective face area in the initial face area according to the first collection boundary line and the second collection boundary line.
  • the area separation unit is configured to segment the effective face area according to the dividing line between the left and right faces to obtain the corresponding left face area and right face area.
  • the device confirmation unit includes:
  • the device confirmation unit is configured to determine the corresponding set of valid image collection devices based on the valid front face coefficients.
  • the device confirmation unit is configured to filter the candidate device set according to the extended connection line corresponding to each valid image capture device in the valid image capture device set, and obtain the corresponding valid device set.
  • the target device confirmation unit is configured to determine the corresponding target device based on the front face coefficient corresponding to each device in the valid device set.
  • in an embodiment, after the target device is determined in the candidate device set according to the front face coefficient of each facial image, the operation authority control device further includes:
  • the cursor data acquisition unit is configured to acquire historical cursor data of the target device.
  • the cursor position confirmation unit is configured to determine the cursor starting position of the external device on the display screen corresponding to the target device based on historical cursor data.
  • in an embodiment, after the target device is determined in the candidate device set according to the front face coefficient of each facial image, the operation authority control device further includes:
  • the cursor initialization unit is configured to initialize the cursor starting position of the external device to the center position of the display screen corresponding to the target device.
  • in an embodiment, after the target device is determined in the candidate device set according to the front face coefficient of each facial image, the operation authority control device further includes:
  • the indicator light lighting unit is configured to send a turn-on instruction to the status indicator light of the target device so that the status indicator light lights up.
  • the operation authority control device provided by the embodiments of this application can execute the operation authority control method provided by any embodiment of this application, and has corresponding functional modules and beneficial effects for executing the method.
  • FIG. 8 is a schematic structural diagram of an operation authority control system provided according to an embodiment of the present application.
  • the operation authority control system includes: an external device 81, a controller 82, at least two image acquisition devices 83, and at least two candidate devices 84.
  • the external device 81 is electrically connected to the controller 82; the controller 82 is electrically connected to each candidate device 84 and each image acquisition device 83 through an expansion connection line 85; the candidate device 84 and the image acquisition device 83 correspond one to one.
  • the controller 82 acquires the facial image of the target object collected by each image acquisition device 83, determines the target device according to the front face coefficient of each facial image and the candidate devices 84, and automatically switches the operating authority of the external device to the target device, so as to operate and control the target device through the external device 81.
  • the external device 81 is configured to control the target device, and the target device can be determined according to the front face coefficient of each facial image and the candidate device 84 .
  • the controller 82 is configured to obtain the facial image of the target object collected by each image acquisition device 83, determine the target device according to the front face coefficient of each facial image and the candidate devices 84, and automatically switch the operation authority of the external device 81 to the target device.
  • the image acquisition device 83 is configured to acquire a facial image of the target object.
  • Candidate devices 84 are configured to provide selection devices for screening of target devices.
  • the expansion connection line 85 is configured to connect the controller 82 to each candidate device 84 and each image acquisition device 83 .
  • the expansion connection line 85 may refer to a connection line through which the controller 82 controls and expands the candidate devices 84, and may include, for example, a Universal Serial Bus (USB) cable or the like.
  • the controller 82 is electrically connected to each candidate device 84 and each image acquisition device 83 through an expansion connection line 85
  • the external device 81 is electrically connected to the controller 82
  • the candidate devices 84 correspond to the image acquisition device 83 one-to-one.
  • the controller 82 can obtain the facial image of the target object collected by each image acquisition device 83, determine the target device according to the front face coefficient of each facial image and the candidate devices 84, and automatically switch the operation authority of the external device 81 to the target device, so that the external device 81 controls the target device.
  • FIG. 9 is a schematic structural diagram of an operation authority control system provided according to an embodiment of the present application, wherein the operation authority control system further includes: at least two status indicator lights 86 .
  • the controller 82 is electrically connected to each status indicator light 86 through an expansion connection line 85; the candidate device 84 corresponds to the status indicator light 86 one-to-one.
  • the controller 82 sends a turn-on instruction to the status indicator light 86 of the target device, so that the status indicator light 86 lights up.
  • the status indicator light 86 is used to light up after the target device is determined according to the front face coefficient of each facial image and the candidate device 84 set, and is used to indicate the corresponding target device.
  • FIG. 10 is an example diagram of an operation authority control system provided according to an embodiment of the present application.
  • taking the candidate devices 84 as computers 841, five in number; the image acquisition devices 83 as cameras 831, with five cameras 831 and five status indicator lights 86; and the external device 81 as a keyboard 811 and a mouse 812, an example diagram of an operation authority control system is described.
  • the operation authority control system includes: keyboard 811, mouse 812, controller 82, 5 cameras 831, 5 computers 841 and 5 status indicators 86.
  • the keyboard 811 and the mouse 812 are electrically connected to the controller 82; the controller 82 is electrically connected to each computer 841, each camera 831 and each status indicator light 86 through expansion connection lines 85; the computers 841 correspond one-to-one with the cameras 831 and the status indicator lights 86.
  • the controller 82 can obtain the facial image of the target object collected by each camera 831, determine the target device according to the front face coefficient of each facial image and the computer 841, and automatically switch the operating authority of the external device to the target device, so as to The target device is operated and controlled through the keyboard 811 and the mouse 812, and at the same time, the controller 82 sends a turn-on instruction to the status indicator light 86 of the target device, so that the status indicator light 86 lights up.
  • the keyboard 811 and the mouse 812 can be connected to the controller 82.
  • the controller 82 is expanded through its interface and connected to different computers 841 (the schematic diagram shows only five computers 841; in actual use, the controller 82 can be expanded and connected to different computers 841 according to the actual situation).
  • the expansion connection cable 85 from the controller 82 to each computer 841 includes a control line (such as USB, etc.) for controlling the computer 841; the expansion connection cable 85 connecting each computer 841 also includes the connecting wire of the camera 831 and the connecting wire of the status indicator light 86.
  • the controller 82 uses a scheduled task polling method to arbitrate the control rights of the expansion connection lines 85 (millisecond-level time intervals can be set to ensure the response speed).
  • each expansion camera 831 respectively collects the user's head image information, and then sends the collected pictures to the controller 82.
  • based on the user's head image collected by each camera 831, the controller 82 can identify the face information in the image and calculate the front face coefficient based on the face information. According to the front face coefficient of the user's head image collected by each camera 831, the expansion connection line 85 that obtains control rights is selected.
  • the status indicator light 86 corresponding to the expansion connection line 85 that has obtained control rights is turned on, and the historical mouse focus data of the corresponding computer 841 is queried; if it exists, it is loaded and the mouse is initialized to the position of the focus data; if it does not exist, the mouse position can be initialized to the center of the screen.
  • the status indicator lights 86 corresponding to the expansion connection lines 85 that have not obtained control rights are turned off, and the current mouse focus is stored (for initializing the mouse position when control rights are subsequently obtained).
  • the status indicator light 86 corresponding to the computer 841 can change to the focus state, that is, the status indicator light 86 lights up; at the same time, the computer 841 is queried for the historical mouse 812 cursor start position. If the historical cursor start position exists, it is loaded and used as the mouse 812 cursor start position; if it does not exist, the mouse 812 cursor start position is initialized to the center of the screen.
  • the status indicator lights corresponding to the remaining four computers 841 can change to the non-focus state, that is, the corresponding status indicator lights 86 are extinguished; at the same time, each computer's current mouse 812 cursor start position is stored.
  • the computer 841 corresponding to the status indicator light 86 in the lit state can obtain the control authority of the keyboard 811 and the mouse 812 .
  • the user can select the computer 841 that needs to be controlled by turning the head.
  • the computer 841 that currently obtains the operating rights is determined through the status indicator light 86 .
  • the focused computer 841 accepts the user's input.
  • the computer 841 that has not obtained the focus blocks the user's keyboard 811 and mouse 812 operations.
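The bullet points above describe granting control to the computer whose camera sees the most frontal face. A minimal sketch of that selection step is shown below; it assumes a head-pose estimator that yields yaw/pitch angles per camera, and the cosine-based coefficient and the `front_face_coefficient`/`select_target` names are illustrative assumptions, not taken from the application.

```python
import math

def front_face_coefficient(yaw_deg: float, pitch_deg: float) -> float:
    """Map head-pose angles to a [0, 1] front face coefficient:
    1.0 when the user looks straight at the camera, falling toward 0
    as the head turns away. (Illustrative formula, not the patent's.)"""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return max(0.0, math.cos(yaw) * math.cos(pitch))

def select_target(poses: dict) -> str:
    """Pick the computer whose camera reports the most frontal face;
    its expansion connection line would then obtain control rights."""
    return max(poses, key=lambda pc: front_face_coefficient(*poses[pc]))

# One (yaw, pitch) head-pose estimate per camera/computer:
poses = {
    "computer-1": (42.0, 5.0),    # user turned well away
    "computer-2": (3.0, -2.0),    # nearly frontal
    "computer-3": (-25.0, 10.0),
}
target = select_target(poses)     # "computer-2"
```

A real system would derive the pose angles from detected facial landmarks; the comparison and arg-max step would stay the same.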
  • FIG. 11 is a schematic structural diagram of an electronic device 10 that implements an operation authority control method according to an embodiment of the present application.
  • Electronic devices are intended to refer to various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers.
  • Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices (eg, helmets, glasses, watches, etc.), and other similar computing devices.
  • the components shown herein, their connections and relationships, and their functions are examples only and are not intended to limit the implementation of the present application as described and/or claimed herein.
  • the electronic device 10 includes at least one processor 11 and a memory communicatively connected to the at least one processor 11, such as a read-only memory (ROM) 12 and a random access memory (RAM) 13, wherein the memory stores a computer program executable by the at least one processor; the processor 11 can perform various appropriate actions and processing according to the computer program stored in the ROM 12 or the computer program loaded from the storage unit 18 into the RAM 13.
  • in the RAM 13, various programs and data required for the operation of the electronic device 10 can also be stored.
  • the processor 11, ROM 12 and RAM 13 are connected to each other via a bus 14.
  • An input/output (I/O) interface 15 is also connected to the bus 14.
  • multiple components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16, such as a keyboard, a mouse, etc.; an output unit 17, such as various types of displays, speakers, etc.; a storage unit 18, such as a magnetic disk, an optical disk, etc.; and a communication unit 19, such as a network card, a modem, a wireless communication transceiver, etc.
  • the communication unit 19 allows the electronic device 10 to exchange information/data with other devices through computer networks such as the Internet and/or various telecommunications networks.
  • Processor 11 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the processor 11 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various processors that run machine learning model algorithms, a digital signal processor (DSP), and any other appropriate processor, controller, microcontroller, etc.
  • the processor 11 executes various methods and processes described above, such as an operation authority control method.
  • a method for controlling operation permissions may be implemented as a computer program, which is tangibly included in a computer-readable storage medium, such as the storage unit 18 .
  • part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19.
  • the processor 11 may be configured to perform an operation authority control method through any other suitable means (for example, by means of firmware).
  • The systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof.
  • These various embodiments may include implementation in one or more computer programs executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor capable of receiving data and instructions from a storage system, at least one input device, and at least one output device, and transmitting data and instructions to the storage system, the at least one input device, and the at least one output device.
  • Computer programs for implementing the methods of the present application may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing device, such that the computer program, when executed by the processor, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented.
  • a computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on a remote machine or server.
  • a computer-readable storage medium may be a tangible medium that may contain or store a computer program for use by or in connection with an instruction execution system, apparatus, or device.
  • Computer-readable storage media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing.
  • the computer-readable storage medium may be a machine-readable signal medium.
  • examples of machine-readable storage media include an electrical connection based on one or more wires, a portable computer disk, a hard drive, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM) or flash memory, optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • the storage medium may be a non-transitory storage medium.
  • to provide interaction with a user, the systems and techniques described herein may be implemented on an electronic device having: a display device for displaying information to the user, such as a cathode ray tube (CRT) or liquid crystal display (LCD) monitor; and a keyboard and pointing device (e.g., a mouse or a trackball) through which the user can provide input to the electronic device.
  • Other kinds of devices may also be used to provide interaction with the user; for example, the feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user may be received in any form (including acoustic input, voice input, or tactile input).
  • the systems and techniques described herein may be implemented in a computing system that includes back-end components (e.g., as a data server), or a computing system that includes middleware components (e.g., an application server), or a computing system that includes front-end components (e.g., a user's computer having a graphical user interface or web browser through which the user can interact with implementations of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware, or front-end components.
  • the components of the system may be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LAN), wide area networks (WAN), blockchain networks, and the Internet.
  • Computing systems may include clients and servers.
  • Clients and servers are generally remote from each other and typically interact over a communications network.
  • the relationship of client and server is created by computer programs running on corresponding computers and having a client-server relationship with each other.
  • the server can be a cloud server, also known as a cloud computing server or cloud host; it is a host product in the cloud computing service system that overcomes the defects of difficult management and weak business scalability found in traditional physical host and virtual private server (VPS) services.
  • This application provides an operation authority control method, control system, electronic device, and storage medium to improve on the situation in related technologies where device control rights must be switched manually, and to switch the operation authority of external devices automatically, simply, and quickly, thereby improving the user experience.
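The focus-switching behavior summarized in this description (status light on for the focused computer, cursor restored from stored history or initialized to the screen center, cursor stored when focus is lost) can be sketched as a small state machine. The class and method names below are illustrative assumptions, not the application's actual implementation.

```python
class FocusSwitcher:
    """Tracks which computer currently owns the keyboard/mouse and
    restores each computer's last cursor position when it regains focus.
    (Illustrative sketch; names and the default screen size are assumptions.)"""

    def __init__(self, screen_center=(960, 540)):
        self.screen_center = screen_center
        self.cursor_history = {}   # computer id -> last stored (x, y)
        self.lights = {}           # computer id -> status light on/off
        self.focused = None

    def switch_to(self, target, current_cursor=None):
        """Move operation authority to `target`; return the position
        the mouse cursor should be initialized to on that computer."""
        if self.focused is not None and self.focused != target:
            # Losing focus: store the cursor and extinguish the light.
            if current_cursor is not None:
                self.cursor_history[self.focused] = current_cursor
            self.lights[self.focused] = False
        # Gaining focus: light the indicator; restore history or center.
        self.lights[target] = True
        self.focused = target
        return self.cursor_history.get(target, self.screen_center)

switcher = FocusSwitcher()
pos = switcher.switch_to("computer-2")                # no history -> center
switcher.switch_to("computer-1", current_cursor=(100, 200))
restored = switcher.switch_to("computer-2")           # restores (100, 200)
```

In the described system, `switch_to` would be driven by the front face coefficient comparison, and the `lights` map would drive the physical status indicator lights.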


Abstract

The present application discloses an operation authority control method and system, an electronic device, and a storage medium. The operation authority control method comprises: acquiring a face image of a target object collected by each image collection apparatus; determining a target device from a candidate device set according to a front face coefficient of each face image, the candidate device set comprising at least two candidate devices, the front face coefficient being used to determine the candidate device that the target object is facing, and the candidate device that the target object is facing being determined as the target device; and switching the operation target of an external device to the target device, so as to perform operation control on the target device by means of the external device.
PCT/CN2023/071308 2022-08-25 2023-01-09 Procédé et système de commande d'autorisation d'opération, dispositif électronique et support de stockage WO2024040861A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211027290.X 2022-08-25
CN202211027290.XA CN115454237A (zh) Operation authority control method, control system, electronic device and storage medium

Publications (1)

Publication Number Publication Date
WO2024040861A1 true WO2024040861A1 (fr) 2024-02-29

Family

ID=84298723

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/071308 WO2024040861A1 (fr) 2022-08-25 2023-01-09 Procédé et système de commande d'autorisation d'opération, dispositif électronique et support de stockage

Country Status (2)

Country Link
CN (1) CN115454237A (fr)
WO (1) WO2024040861A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115454237A (zh) 2022-08-25 2022-12-09 湖北星纪时代科技有限公司 Operation authority control method, control system, electronic device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015050545A1 * 2013-10-03 2015-04-09 Thomson Licensing Improved multi-screen management
CN108919982A * 2018-06-14 2018-11-30 北京理工大学 An automatic keyboard-and-mouse switching method based on face orientation recognition
CN109803450A * 2018-12-12 2019-05-24 平安科技(深圳)有限公司 Method for connecting a wireless device to a computer, electronic apparatus and storage medium
CN112214103A * 2020-08-28 2021-01-12 深圳市修远文化创意有限公司 Method for connecting a computer with a wireless control device and related apparatus
CN115454237A * 2022-08-25 2022-12-09 湖北星纪时代科技有限公司 Operation authority control method, control system, electronic device and storage medium


Also Published As

Publication number Publication date
CN115454237A (zh) 2022-12-09

Similar Documents

Publication Publication Date Title
WO2017181769A1 (fr) Facial recognition method, apparatus and system, device, and storage medium
CN106170978B (zh) Depth map generation apparatus, method, and non-transitory computer-readable medium
US10440284B2 (en) Determination of exposure time for an image frame
CN107231470B (zh) Image processing method, mobile terminal and computer-readable storage medium
WO2020237611A1 (fr) Image processing method and apparatus, control terminal and mobile device
JP2016531362A (ja) Skin color adjustment method, skin color adjustment apparatus, program and recording medium
CN105787884A (zh) An image processing method and electronic device
WO2019129020A1 (fr) Camera autofocus method, storage device and mobile terminal
WO2016070688A1 (fr) Remote control method and system for a virtual operation interface
WO2022152001A1 (fr) Gesture recognition method and apparatus, electronic device, readable storage medium and chip
WO2024040861A1 (fr) Operation authority control method and system, electronic device and storage medium
CN105306819B (zh) A gesture-controlled photographing method and apparatus
WO2019041147A1 (fr) Point recognition method, device and system
KR20220157485A (ko) Detection result output method, electronic device and medium
WO2017152592A1 (fr) Mobile terminal application operation method and mobile terminal
CN108197560B (zh) Face image recognition method, mobile terminal and computer-readable storage medium
CN107888829A (zh) Focusing method of mobile terminal, mobile terminal and storage medium
CN109840476B (zh) A face shape detection method and terminal device
CN112446251A (zh) Image processing method and related apparatus
WO2019061466A1 (fr) Flight control method, remote control device and remote control system
CN109104573B (zh) A method for determining a focus point and terminal device
CN104378576B (zh) An information processing method and electronic device
US20230368177A1 (en) Graphic code display method, terminal and storage medium
CN110338750B (zh) An eye-tracking device
CN111385481A (zh) Image processing method and apparatus, electronic device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23855958

Country of ref document: EP

Kind code of ref document: A1