CN115454237A - Control method and control system of operation authority, electronic device and storage medium - Google Patents

Control method and control system of operation authority, electronic device and storage medium

Info

Publication number
CN115454237A
CN115454237A
Authority
CN
China
Prior art keywords
face
target
equipment
candidate
coefficient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211027290.XA
Other languages
Chinese (zh)
Inventor
王淼军
杨瑞朋
曹阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hubei Xingji Shidai Technology Co Ltd
Original Assignee
Hubei Xingji Shidai Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hubei Xingji Shidai Technology Co Ltd filed Critical Hubei Xingji Shidai Technology Co Ltd
Priority to CN202211027290.XA priority Critical patent/CN115454237A/en
Publication of CN115454237A publication Critical patent/CN115454237A/en
Priority to PCT/CN2023/071308 priority patent/WO2024040861A1/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012 Head tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 User authentication
    • G06F21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The application discloses a control method, a control system, an electronic device, and a storage medium for operation authority. The control method of the operation authority comprises the following steps: acquiring a face image of a target object acquired by each image acquisition device; determining a target device in a candidate device set according to the front face coefficient of each face image, wherein the candidate device set includes at least two candidate devices, the front face coefficient is used to determine the candidate device that the target object faces, and that candidate device is determined as the target device; and switching an operation target of the external device to the target device, so as to perform operation control on the target device through the external device. According to the method and the device, the operation authority of the external device is automatically switched to the target device according to the face orientation of the target object, and the use experience of the user is improved.

Description

Control method and control system of operation authority, electronic device and storage medium
Technical Field
The present application relates to the field of intelligent control technologies, and in particular, to a method, a system, an electronic device, and a storage medium for controlling operation authority.
Background
In some usage scenarios of computers (e.g., multi-server or multi-computer scenarios such as an information control center, a call center, or a data center), one set of external devices (e.g., a mouse and a keyboard) is sometimes required to control multiple computers.
In the prior art, KVM (Keyboard, Video, Mouse) technology may be used to allow one set of external devices to control multiple computers. However, switching control between devices typically must be done manually, by means of a physical button or an on-screen button, which adds extra manual steps for the user.
Disclosure of Invention
The application provides a control method, a control system, an electronic device, and a storage medium for operation authority, which solve the problem in the prior art that the control authority of a device must be switched manually, and which switch the operation authority of the external device automatically, simply, and quickly, thereby improving the use experience of the user.
According to a first aspect of the present application, an embodiment of the present application provides a method for controlling an operation authority, where the method includes:
acquiring a face image of a target object acquired by each image acquisition device;
determining a target device in the candidate device set according to the front face coefficient of each face image; wherein the candidate device set comprises at least two candidate devices, the front face coefficient is used to determine the candidate device that the target object faces, and the candidate device that the target object faces is determined as the target device;
and switching the operation target of the external equipment to the target equipment so as to carry out operation control on the target equipment through the external equipment.
According to a second aspect of the present application, an embodiment of the present application provides a control device for an operation authority, where the device includes:
the image acquisition module is used for acquiring the facial image of the target object acquired by each image acquisition device;
a device determination module, configured to determine a target device in the candidate device set according to the front face coefficient of each face image; wherein the candidate device set comprises at least two candidate devices, the front face coefficient is used to determine the candidate device that the target object faces, and the candidate device that the target object faces is determined as the target device;
and the permission switching module is used for switching the operation target of the external equipment to the target equipment so as to carry out operation control on the target equipment through the external equipment.
According to a third aspect of the present application, an embodiment of the present application provides a control system for operation authority, comprising: an external device, a controller, at least two image acquisition devices, and at least two candidate devices; wherein the external device is electrically connected to the controller; the controller is electrically connected to each candidate device and each image acquisition device through expansion connection lines; and the candidate devices correspond one-to-one with the image acquisition devices;
the controller acquires the face image of the target object acquired by each image acquisition device and determines the target device among the candidate devices according to the front face coefficient of each face image, where the front face coefficient is used to determine the candidate device that the target object faces and that candidate device is determined as the target device; the controller then switches the operation target of the external device to the target device, so as to perform operation control on the target device through the external device.
According to a fourth aspect of the present application, an embodiment of the present application provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to enable the at least one processor to perform a method of controlling an operating right of any of the embodiments of the present application.
According to a fifth aspect of the present application, an embodiment of the present application provides a computer-readable storage medium, where computer instructions are stored, and the computer instructions are configured to, when executed, cause a processor to implement a method for controlling operation authority of any embodiment of the present application.
According to the technical scheme of the application, face images are acquired by a plurality of image acquisition devices, a front face coefficient is determined for each face image, the corresponding target device is screened out of the candidate device set according to the front face coefficients, and the operation target of the external device is switched to the target device. The operation authority of the external device is thus automatically switched to the target device according to the face orientation of the target object, which solves the technical problem in the prior art that operation control must be switched manually, achieves automatic switching of the operation authority of the external device simply and quickly, and improves the use experience of the user.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a flowchart of a method for controlling operation authority according to an embodiment of the present application;
fig. 2 is a flowchart of a method for controlling operation authority according to an embodiment of the present application;
fig. 3 is a flowchart of a method for controlling operation authority according to an embodiment of the present application;
FIG. 4 is a schematic view of a face orientation provided according to an embodiment of the present application;
FIG. 5 is a schematic view of a face image segmentation provided according to an embodiment of the present application;
FIG. 6 is a flowchart of a front face coefficient calculation method according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a control device for controlling operation authority according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a control system for operation authority according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a control system for operation authority according to an embodiment of the present application;
FIG. 10 is a diagram of an example of a control system for operating rights according to an embodiment of the application;
fig. 11 is a schematic structural diagram of an electronic device implementing a method for controlling operation authority according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, and not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Moreover, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
An electronic device (also referred to as User Equipment (UE)) in this embodiment is a device having wireless transceiving functions. The electronic device may communicate with a core network via a Wireless Local Area Network (WLAN) or a cellular network, and exchange voice and/or data with a Radio Access Network (RAN). The electronic device may be deployed on land, including indoors or outdoors, hand-held or vehicle-mounted; on the water surface (such as on a ship); or in the air (e.g., on airplanes, balloons, or satellites). The electronic device may be a mobile phone, a tablet computer (pad), a computer with wireless transceiving functions, a Virtual Reality (VR) terminal, an Augmented Reality (AR) terminal, a wireless terminal in industrial control, a wireless terminal in self driving, a wireless terminal in remote medical, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, and the like.
The electronic device may include, for example, a User Equipment (UE), a wireless terminal device, a mobile terminal device, a device-to-device (D2D) communication terminal device, a vehicle-to-everything (V2X) terminal device, a machine-to-machine/machine-type communication (M2M/MTC) terminal device, an Internet of Things (IoT) terminal device, a subscriber unit, a subscriber station, a mobile station, a remote station, an Access Point (AP), a remote terminal, customer premise equipment (CPE), Fixed Wireless Access (FWA), an access terminal, a user terminal, a user agent, or a user device, etc. For example, it may include mobile telephones (otherwise known as "cellular" telephones) and computers with mobile terminal equipment, including portable, pocket-sized, hand-held, and computer-included mobile devices; for example, Personal Communication Service (PCS) phones, cordless phones, Session Initiation Protocol (SIP) phones, Wireless Local Loop (WLL) stations, Personal Digital Assistants (PDAs), and the like. Also included are constrained devices, such as devices with low power consumption, limited storage capability, or limited computing capability, and information sensing devices such as bar codes, Radio Frequency Identification (RFID), sensors, Global Positioning Systems (GPS), and laser scanners.
In an embodiment, fig. 1 is a flowchart of a method for controlling operation authority according to an embodiment of the present application. This embodiment is applicable to the case where the operation authority of an external device is switched automatically. The method may be executed by a control device of the operation authority, which may be implemented in the form of hardware and/or software and configured in an electronic device; illustratively, the electronic device may be a controller. As shown in fig. 1, the method includes:
and S110, acquiring the facial image of the target object acquired by each image acquisition device.
The image acquisition device is used for acquiring images of the target object. In actual operation, the type of the image acquisition device is not limited; for example, it may include, but is not limited to, a camera, an infrared image sensor, or an infrared camera. The image acquisition device can be installed at any position with an unobstructed view, as long as it can accurately acquire the facial image of the target object; it can be independent (such as a standalone camera, infrared image sensor, or infrared camera device) or integrated into a candidate device. Of course, to ensure image acquisition of the target object in all directions, a plurality of image acquisition devices may be arranged uniformly around the target object. It is understood that the orientation of the target object differs across the facial images captured by the different image acquisition devices.
In an embodiment, the face image may refer to a face image of the target object, and exemplarily, the face image may include a front face image, a side face image, and the like of the target object. In an embodiment, the image of the face of the target object is captured by the image capturing device as an image of the face of the target object.
In an embodiment, each image capturing device may actively report its captured facial image of the target object; alternatively, the controller may actively pull the facial image of the target object acquired by each image acquisition device; this is not limited. When the image capturing devices actively report, they may report their captured facial images to the controller simultaneously, or sequentially according to a preset reporting order. Similarly, when the controller actively pulls the facial images, it may acquire them from all image acquisition devices simultaneously, or from each device in turn according to the order. The preset reporting order may be determined according to the sequence numbers of the image acquisition devices.
S120, determining the target device in the candidate device set according to the front face coefficient of each face image; wherein the candidate device set comprises at least two candidate devices, the front face coefficient is used to determine the candidate device that the target object faces, and the candidate device that the target object faces is determined as the target device.
The front face coefficient is used to characterize the degree to which the face of the target object is frontally oriented toward each candidate device. In an embodiment, the front face coefficient may be determined from the area of the left face region and the area of the right face region in each face image; for example, it may be calculated as the ratio of the left face area to the right face area minus 1. The smaller the value of the front face coefficient, the higher the degree to which the face is oriented toward the candidate device; conversely, the larger the value, the lower the degree. The candidate device set may refer to the set of all candidate devices connected to one controller; the candidate devices may include electronic devices such as computers or servers that have data storage, data computation, and similar capabilities. In actual operation, the number of candidate devices is not limited, as long as at least two candidate devices are connected to each controller; that is, the candidate device set includes at least two candidate devices. The target device refers to the device in the candidate device set toward which the target object is most frontally oriented. It is understood that all candidate devices in the candidate device set are screened according to the front face coefficient of each face image, and the candidate device with the highest degree of frontal orientation is taken as the target device. The candidate devices and the image capturing devices may be in one-to-one correspondence, that is, the number of candidate devices equals the number of image capturing devices, and each candidate device corresponds to one image capturing device.
The facial images of the target object acquired by the image acquisition devices corresponding to the candidate devices in the candidate device set all differ, so the front face coefficients corresponding to different face images may differ, and the target device can be determined by comparing the front face coefficients corresponding to the candidate devices. That is, the front face coefficient is calculated from the face image corresponding to each candidate device, and from the front face coefficient of each face image the target device can be determined in the candidate device set; for example, the candidate device corresponding to the face image with the smallest front face coefficient may be determined as the target device, as sketched below.
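For illustration only, the selection rule described above reduces to an argmin over the per-device coefficients. A minimal sketch in Python (the device IDs and the dictionary layout are hypothetical; the patent does not prescribe an implementation):

```python
# Hypothetical sketch of the selection rule: each candidate device is paired
# with the front face coefficient computed from its camera's face image, and
# the device with the smallest coefficient is chosen as the target device.
def select_target_device(coefficients: dict[str, float]) -> str:
    """coefficients maps a candidate-device ID to its front face coefficient."""
    return min(coefficients, key=coefficients.get)

# Example: "pc2" is faced most directly (smallest coefficient), so it wins.
print(select_target_device({"pc1": 0.62, "pc2": 0.04, "pc3": 0.35}))  # -> pc2
```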
S130, switching the operation target of the external device to the target device, so as to perform operation control on the target device through the external device.
The operation target refers to the device to which the controller directs the external device's control right; once the operation target is changed to the target device, the external device can control the target device.
In an embodiment, after the target device is determined in the candidate device set according to the front face coefficient of each face image, the controller may switch the operation target to the target device; after the operation target of the external device is changed to the target device, the target device can be operation-controlled through the external device.
According to the embodiment of the application, the image acquisition devices acquire facial images of the target object, the corresponding front face coefficients are determined from the acquired facial images, the candidate devices are screened according to the front face coefficients to obtain the target device, the operation target of the external device is switched to the target device, and the target device is operation-controlled through the external device. The operation authority of the external device is thus switched automatically according to the face orientation of the target object, which improves the convenience of switching the operation authority of the external device and the use experience of the user.
In an embodiment, after determining the target device in the candidate device set according to the front face coefficient of each face image, the method further comprises: acquiring historical cursor data of the target device; and determining, according to the historical cursor data, the cursor starting position of the external device on the display screen corresponding to the target device.
The historical cursor data may refer to the position where the cursor of the external device last stayed on the target device when the target device most recently held the operation right of the external device. In one embodiment, the historical cursor data may be stored in the controller. The cursor starting position may refer to the initial position of the cursor of the external device on the screen, and may be any position on the screen where the cursor stops.
In an embodiment, the controller may extract the locally stored historical cursor data of the target device and, according to it, load the cursor starting position on the display screen corresponding to the target device. After extracting the historical cursor data, the controller can read the position where the cursor stayed when the target device last held the operation right of the external device, and control the cursor starting position displayed on the target device to be that position.
In an embodiment, after determining the target device in the candidate device set according to the front face coefficient of each face image, the method further includes: initializing the cursor starting position of the external device to the center position of the display screen corresponding to the target device.
In an embodiment, when the controller does not locally hold historical cursor data for the target device, the cursor starting position of the external device may be initialized to a default position, which may be preset; in actual operation, the default position may be set to the center of the screen. That is, when the controller has no historical cursor data for the target device, the cursor starting position of the external device is initialized so that the cursor automatically moves to the center of the display screen corresponding to the target device.
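A minimal sketch combining the two behaviors above (stored history preferred, screen center as fallback; the storage layout and names are hypothetical):

```python
# Hypothetical sketch: restore the cursor position last recorded for the
# target device, or fall back to the center of its display screen.
def cursor_start_position(history: dict[str, tuple[int, int]],
                          device_id: str,
                          screen_size: tuple[int, int]) -> tuple[int, int]:
    if device_id in history:       # historical cursor data exists
        return history[device_id]  # resume where the cursor last stayed
    w, h = screen_size
    return (w // 2, h // 2)        # default: initialize to the screen center
```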
In an embodiment, after determining the target device in the candidate device set according to the front face coefficient of each face image, the method further includes: sending a start instruction to the status indicator light of the target device so that the status indicator light is lit.
The status indicator light can be used for indicating whether the target device acquires the operation permission, and in the embodiment, the status indicator light can display different statuses according to different control statuses of the target device. For example, when the target device obtains the control right of the external device, the status indicator lamp of the target device may be in an on state; when the target device does not acquire the control right of the external device, the indicator light of the target device may be in an off state. In an actual operation process, the status indicator lamps and the candidate devices may be in a one-to-one correspondence relationship, that is, each candidate device is connected with one status indicator lamp, and after the candidate device is determined as the target device, the corresponding status indicator lamp may display a corresponding change in status.
In an embodiment, after the controller determines the target device according to the front face coefficient of each face image and the candidate device set, it may control the status indicator light corresponding to the target device to change its display state. In actual operation, the controller can send a start instruction to the status indicator light of the target device, and the status indicator light switches its display state to lit according to that instruction, as sketched below.
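A trivial sketch of the indicator-light update (the LED interface is an assumption for illustration; any GPIO or hub API would do):

```python
# Hypothetical sketch: light the target device's status LED (focus state)
# and turn every other candidate device's LED off (non-focus state).
def update_status_leds(leds, target_id):
    """leds maps a candidate-device ID to an object with on() and off() methods."""
    for device_id, led in leds.items():
        if device_id == target_id:
            led.on()   # this device holds the control right
        else:
            led.off()  # this device does not hold the control right
```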
In an embodiment, fig. 2 is a flowchart of a method for controlling operation authority according to an embodiment of the present application, and the present embodiment is a further refinement of the method for controlling operation authority based on the above embodiment. As shown in fig. 2, the method includes:
S210, sending an image acquisition instruction to each image acquisition device so that each image acquisition device acquires the facial image of the target object.
The image acquisition instruction may be an instruction that directs an image acquisition device to start acquiring images; in an embodiment it may take the form of a binary value that causes the image acquisition device to perform the act of capturing the facial image of the target object. The instruction may be sent at system startup; it is understood that the controller may send the image acquisition instruction to each image acquisition device while the system starts. In one embodiment, the controller may send the image acquisition instruction to all image acquisition devices simultaneously, or to each device in turn. In an embodiment, the controller sends the image acquisition instruction to each image acquisition device immediately after the system starts, so that each device begins facial image acquisition of the target object.
S220, sequentially acquiring the facial image of the target object captured by each image acquisition device by timed polling.
Timed polling refers to querying the devices in sequence at a preset period; "timed" can be understood as a fixed polling interval, and the preset period may be pre-configured in the controller by the manufacturer according to experience. In an embodiment, the controller polls each image acquisition device every preset period to acquire the facial image of the target object captured by it.
In an embodiment, the controller may sequentially acquire the facial image of the target object captured by each image acquisition device, performing the acquisition by timed polling according to a preset order of the image acquisition devices. That is, the controller polls each image acquisition device at a fixed period in the preset order and acquires the captured facial images in turn. The preset period is not limited and may be set by the manufacturer according to experience; to ensure response speed, it may be set at the millisecond level. For example, the preset period stored in the controller may include, but is not limited to, 5 milliseconds, 10 milliseconds, 100 milliseconds, and the like; the preset period may also be a specific time range. A sketch of such a polling loop follows.
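A minimal polling loop consistent with this description (the capture() method and the 100 ms period are assumptions for illustration):

```python
import time

# Hypothetical sketch of the timed polling described above: every
# POLL_PERIOD_S seconds the controller queries the cameras in their preset
# order and collects one facial image from each.
POLL_PERIOD_S = 0.1  # e.g., 100 ms; the patent leaves the period to the vendor

def poll_cameras(cameras):
    """cameras: ordered list of objects exposing a capture() -> image method."""
    while True:
        images = [cam.capture() for cam in cameras]  # one image per device
        yield images                                 # hand off for processing
        time.sleep(POLL_PERIOD_S)
```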
S230, determining the front face coefficient of each facial image to obtain a front face coefficient set.
In an embodiment, a front face coefficient is calculated for each face image and may be determined from the left face area and the right face area. The method of calculating the area values of the left and right face regions is not limited; for example, the pixels of each region may be counted, with the number of pixels in a region taken as the quantized value of its area. The front face coefficient may be calculated as the absolute value of the ratio of the left face area to the right face area minus 1, the absolute value ensuring that the coefficient is non-negative. After the front face coefficient of each face image is calculated, the coefficients can be aggregated into a front face coefficient set.
In one embodiment, S230, including S2301-S2305:
S2301, identifying and extracting the initial face region in each face image.
The initial face region may be the face region extracted from the face image of the target object according to the face contour; it may be the closed-loop region bounded by the face boundary and include each facial feature. In one embodiment, the initial face region may be obtained by filtering out the background region of the face image.
In an embodiment, the face image may include an initial face region and a background region excluding the initial face region, and the controller may recognize the initial face region, form a face contour, and extract the initial face region based on the face contour.
S2302, determining the left-right face boundary line according to the pre-acquired eye distance center position and mouth center position.
The eye distance refers to the distance between the center points of the two eyes, and the eye distance center position refers to the midpoint between the centers of the left and right eyes; the mouth center position refers to the midpoint between the left and right mouth corners. In actual operation, a two-dimensional coordinate system can be established on the face image, and the eye distance center position and the mouth center position determined from coordinates in that system. The order of determining the two positions is not limited. For example, a two-dimensional coordinate system may be established with the bottom edge of the face image as the horizontal axis and the left edge as the vertical axis to identify the eye distance center position and the mouth center position. The left-right face boundary line is the line dividing the left and right faces and may pass through the central axis of the face; the method of determining it is not limited.
In an embodiment, the eye distance center position may be determined from the left eye center position and the right eye center position, and the mouth center position from the left mouth corner position and the right mouth corner position. The controller can establish a two-dimensional coordinate system on the face image to obtain the left eye center, right eye center, left mouth corner, and right mouth corner positions, determine the midpoint between the two eye centers as the eye distance center position, and determine the midpoint between the two mouth corners as the mouth center position. In one embodiment, the eye distance center and the mouth center may be connected and the line extended to define the left-right face boundary line.
S2303, segmenting the initial face region according to the left-right face boundary line to obtain the corresponding left face region and right face region.
The left face area may be a face area on the left side after the initial face area is segmented according to the left and right face boundary lines; the right face region may refer to a right face region after the initial face region is divided according to the left and right face boundaries.
In an embodiment, the initial face area is segmented according to left and right face boundary lines, and corresponding left and right face areas can be obtained.
In one embodiment, S2303 includes: determining a first acquisition boundary line of an initial face area according to the central positions of two eyes; determining a second acquisition boundary line of the initial face area according to the positions of the two mouth angles; intercepting an effective face area in the initial face area according to the first acquisition boundary line and the second acquisition boundary line; and segmenting the effective face area according to the left and right face boundary lines to obtain a corresponding left face area and a corresponding right face area.
The first acquisition boundary line refers to the first boundary line of the effective face region and, in an embodiment, may be determined from the centers of the two eyes. The second acquisition boundary line refers to the second boundary line of the effective face region and may be determined from the two mouth corner positions. The effective face region is the region actually measured for the left and right face areas; in actual operation, it may be cut out according to the first and second acquisition boundary lines.
In an embodiment, in order to reduce the influence of face-independent factors (such as hair or beard) on the calculation of the front face coefficient, the effective face region is cut out of the initial face region so that the coefficient is calculated more reasonably. The effective face region may be set by the manufacturer according to experience and may comprise the face area below the centers of the two eyes and above the two mouth corners. The initial face region can be cut using preset acquisition boundary lines, comprising a first acquisition boundary line and a second acquisition boundary line. The first acquisition boundary line is the upper boundary of the effective face region: it is determined from the centers of the two eyes by connecting the left eye center point and the right eye center point and extending the line. The second acquisition boundary line is the lower boundary: it is determined from the two mouth corners by connecting the left mouth corner point and the right mouth corner point and extending the line. The effective face region is then cut out of the initial face region according to these two boundary lines and segmented according to the left-right face boundary line, which may be generated by connecting the midpoint between the left and right eye centers with the midpoint between the left and right mouth corners; this yields the corresponding left face region and right face region. A sketch of this construction follows.
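The construction above reduces to a few midpoints and line segments. A sketch under the assumption that the four landmark coordinates are already available from some face-landmark detector:

```python
# Hypothetical sketch of the boundary construction: given the two eye centers
# and the two mouth corners, build the first acquisition boundary line
# (through the eyes), the second acquisition boundary line (through the mouth
# corners), and the left-right dividing line (eye distance center to mouth center).
def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def face_boundaries(left_eye, right_eye, left_mouth, right_mouth):
    first_boundary = (left_eye, right_eye)        # upper bound of effective area
    second_boundary = (left_mouth, right_mouth)   # lower bound of effective area
    dividing_line = (midpoint(left_eye, right_eye),      # eye distance center
                     midpoint(left_mouth, right_mouth))  # mouth center
    return first_boundary, second_boundary, dividing_line
```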
S2304, determining the front face coefficient of the corresponding face image according to the area of the left face region and the area of the right face region.
In the embodiment, the way of determining the left and right face areas is not limited. For example, the areas may be quantized by pixel counting: the number of pixels in the left face region and in the right face region is computed, and the pixel counts are taken as the quantized area values. The front face coefficient of the corresponding face image is then determined from the area of the left face region and the area of the right face region; it may be calculated as the absolute value of the ratio of the left face area to the right face area minus 1, the absolute value ensuring that the coefficient is non-negative. In this way, the front face coefficient of each face image is determined, as in the sketch below.
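The coefficient definition above fits in one line; a sketch with the areas quantized as pixel counts, as the text suggests (the example area values are purely illustrative):

```python
# Hypothetical sketch of the front face coefficient defined above:
# |left_area / right_area - 1|, where each area is a pixel count.
def front_face_coefficient(left_area: int, right_area: int) -> float:
    return abs(left_area / right_area - 1)

# A perfectly frontal face gives 0; a turned face gives a larger value.
print(front_face_coefficient(5000, 5000))  # 0.0 (frontal)
print(front_face_coefficient(6000, 4000))  # 0.5 (turned)
```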
S2305, the front face coefficients of each face image are combined into a corresponding front face coefficient set.
In an embodiment, a corresponding front face coefficient is calculated for each face image, and the coefficients of all face images may be aggregated into the corresponding front face coefficient set.
S240, screening the front face coefficient set according to the preset front face coefficient threshold to obtain the corresponding effective front face coefficient set.
The front face coefficient threshold is the critical value used to distinguish effective front face coefficients. Its value is not limited and may be set by the manufacturer according to experience. The effective front face coefficient set refers to the set of effective coefficients screened according to the threshold. An effective front face coefficient is one computed when a valid face region exists in the face image: when a human face is present in the image, the corresponding front face coefficient may be considered effective; when no face is present, it may be considered invalid.
In an embodiment, the front face coefficient threshold may be configured in advance for screening effective coefficients, and its value is not limited; illustratively, it may be 0, -1, etc. It may be stipulated that a front face coefficient is effective when it is greater than the threshold. The front face coefficient set is screened against the preset threshold by comparing its coefficients with the threshold one by one; a coefficient greater than the threshold is considered effective, and after all effective coefficients are obtained, they are aggregated into the effective front face coefficient set, as sketched below.
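A sketch of this screening step (the sentinel for a no-face image and the threshold value are illustrative assumptions consistent with the description):

```python
# Hypothetical sketch: when no face is found in an image, its coefficient is
# marked with a sentinel (assumed here to be -1); coefficients strictly
# greater than the threshold are kept as effective.
FACE_COEFF_THRESHOLD = -1  # illustrative value; the patent leaves it to the vendor

def screen_coefficients(coeffs: dict[str, float]) -> dict[str, float]:
    """coeffs maps a device ID to its front face coefficient or the sentinel."""
    return {dev: c for dev, c in coeffs.items() if c > FACE_COEFF_THRESHOLD}
```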
S250, determining the target device in the candidate device set according to the effective front face coefficient set.
In an embodiment, each front face coefficient corresponds to an image acquisition device, so the effective image acquisition devices can be identified from the effective front face coefficient set. Since the image acquisition devices correspond one-to-one with the candidate devices, the candidate device set can be screened according to the effective image acquisition devices to obtain the corresponding effective candidate device set, and the target device determined in it according to the effective front face coefficient set. The target device can be determined in various ways; for example, it may be preset that the effective candidate device corresponding to the smallest effective front face coefficient is the target device. The coefficient values in the effective set differ, so the smallest effective front face coefficient can be found by comparison, and the candidate device corresponding to it is determined as the target device.
In one embodiment, determining the target device in the candidate device set according to the effective front face coefficient set comprises: determining the corresponding effective image acquisition device set according to the effective front face coefficients; screening the candidate device set according to the expansion connection line corresponding to each effective image acquisition device in the effective image acquisition device set to obtain the corresponding effective device set; and determining the corresponding target device according to the front face coefficient corresponding to each device in the effective device set.
The type of the expansion connection line is not limited, as long as the controller can operate and expand the candidate devices; for example, it may include a Universal Serial Bus (USB) line or the like.
In an embodiment, the corresponding effective image acquisition device set is determined from the effective front face coefficient set: the effective front face coefficients correspond one-to-one with the facial images captured by the image acquisition devices, so the effective image acquisition devices can be determined from the facial images behind the effective coefficients and aggregated into the effective image acquisition device set. The image acquisition devices correspond one-to-one with the candidate devices and may be connected to them through expansion connection lines, so the candidate device set can be screened according to the expansion connection line of each effective image acquisition device to obtain the corresponding effective device set. Each device in the effective device set corresponds to one effective front face coefficient; by comparing the coefficient values, the smallest effective front face coefficient is found, and the effective candidate device corresponding to it is determined as the target device.
S260, automatically switching the operation authority of the external device to the target device, so as to perform operation control on the target device through the external device.
According to the embodiment of the application, the facial images of the target object captured by the image acquisition devices are acquired in turn by timed polling, so the orientation of the target object toward the different candidate devices is collected in real time; screening the front face coefficient set against the preset threshold yields the effective front face coefficients and excludes non-effective candidate devices from the competition for control authority; and determining the target device from the effective front face coefficient set and the candidate device set makes the judgment of the target device more accurate, improves the accuracy of operation-authority control, and improves the use experience of the user.
In an embodiment, fig. 3 is a flowchart of a method for controlling operation authority according to an embodiment of the present application. The present embodiment gives a specific description of the control method of the operation authority, on the basis of the above embodiments, taking cameras as the image acquisition devices. As shown in fig. 3, the method includes:
and S310, each camera acquires a face image of the target object.
The controller controls each expansion camera to acquire the facial image of the target object by way of a timed polling task (to ensure real-time response speed, the timed task is set at a millisecond-level period).
S320, calculating the front face coefficient of each face image.
Each camera transmits its captured head image of the user to the controller; the controller first identifies the initial face information in each facial image captured by the cameras and then calculates the front face coefficient.
S330, determining the effective candidate devices.
After the front face coefficients of the face images captured by all the cameras are calculated: if the front face coefficient calculated for a camera's captured face image is invalid, the corresponding candidate device does not participate in the competition for the control right; otherwise, if the coefficient calculated for the captured face image is effective, the corresponding candidate device is taken as an effective candidate device. Whether a candidate device is an effective candidate device can thus be determined from the front face coefficient calculated for it.
S340, determining whether the effective candidate device is the target device; if so, proceed to S370; if not, proceed to S350.
S350, turning off the status indicator light of the candidate device.
The status indicator light corresponding to each candidate device that has not acquired the control right is turned off: such a device's indicator light is set to the non-focus state, i.e., the off state (prompting the user that this computer has lost the control authority).
S360, recording the cursor information of the candidate device. The cursor information of the currently controlled device is stored (it is used to initialize the cursor starting position when the control right is acquired again later).
S370, lighting the status indicator light of the candidate device. The status indicator light corresponding to the candidate device that has acquired the control right is turned on and set to the focus state, i.e., the lit state (prompting the user which computer has acquired the control right).
S380, judging whether historical cursor information exists; if so, proceed to S390; if not, initialize the cursor information (e.g., initialize the cursor to the middle of the screen).
S390, loading the historical cursor information. The historical cursor data of the corresponding candidate device is queried and, if it exists, loaded.
In an embodiment, fig. 4 is a schematic view of a face orientation provided according to an embodiment of the present application. As shown in fig. 4, when the face image of the target object captured by the image acquisition device is any one of those shown in fig. 4, the calculated front face coefficient is an effective front face coefficient. It should be noted that the face orientations may include, but are not limited to, the five face orientations in fig. 4, as long as an effective face region can be identified.
The principle of calculating the front face coefficient is as follows: when a human face is observed from different angles in the same horizontal plane, the left and right face areas are approximately equal when the face is observed from the front, they differ when the face is observed from the side, and the difference between them becomes more and more obvious as the deviation angle (relative to the frontal view) increases. Based on this principle, in the polling timed task, the image acquisition devices capture different head images of the user at the same moment, and the front face coefficients calculated from them differ.
In an embodiment, fig. 5 is a schematic view of a face image segmentation provided according to an embodiment of the present application. As shown in fig. 5, the face image may be segmented into the following regions:
according to the facial image 116 of the target object collected by the image collecting device, firstly, the face information in the image is extracted, the face boundary line 110 is identified, and the closed loop area of the face boundary is formed. Positions of left eyes 101 and right eyes 102 in the face information are acquired, a left eye center position point 103 and a right eye center position point 104 are confirmed, and the left eye center position point 103 and the right eye center position point 104 are connected and used as an extension line, and the extension line is used as a first acquisition boundary line. The left mouth corner point 106 and the right mouth corner point 107 in the face information are obtained, the left mouth corner point 106 and the right mouth corner point 107 are connected and are used as extension lines, a central position point between the left eye central position point 103 and the right eye central position point 104 is used as an eye distance central position point 105, a central position point between the left mouth corner point 106 and the right mouth corner point 107 is used as a mouth left and right central position point 108, the eye distance central position point 105 and the mouth left and right central position point 108 are connected and are used as left and right face dividing lines 109, and the left and right face dividing lines 109 can penetrate through a nose 113. An effective face area is determined based on the face boundary line 110, the first acquisition boundary line 117 and the second acquisition boundary line 118, the face boundary line between the left-hand intersections of the face boundary line 110 of the first acquisition boundary line 117 and the second acquisition boundary line 118 being the left-hand boundary line 111, and the face boundary line between the right-hand intersections of the face boundary line 110 of the first acquisition boundary line 117 and the second acquisition boundary line 118 being the right-hand boundary line 112. A region surrounded by the first collection boundary 117, the second collection boundary 118, the left face boundary 111, and the left and right face dividing line 109 is a left face region 115; the region surrounded by the first collection boundary 117, the second collection boundary 118, the right face boundary 112, and the left and right face dividing line 109 is the right face region 114.
In an embodiment, fig. 6 is a flowchart of a method for calculating the front face coefficient according to an embodiment of the present application. This embodiment explains the calculation process of the front face coefficient on the basis of the above embodiments. As shown in fig. 6, the method includes:
S4010, extracting the face information in the image according to the face image of the target object acquired by the image acquisition device.
S4020, identifying whether a human face exists in the acquired image; if so, entering S4030, and if not, entering S4130.

S4030, identifying an initial face region according to the collected face image.
S4040, identifying the center positions of the two eyes. The center positions of the two eyes are identified in the closed-loop region of the face boundary. Illustratively, in the two-dimensional plane, the coordinates of the left and right eye center positions are assumed to be (x_el, y_el) and (x_er, y_er).
S4050, calculating the eye distance center position. The eye distance center position (x_em, y_em) is calculated from the center positions of the left and right eyes as follows:

x_em = (x_el + x_er) / 2

y_em = (y_el + y_er) / 2
S4060, identifying the left mouth corner position and the right mouth corner position. The left and right mouth corners are identified in the closed-loop region of the face boundary. Illustratively, in the two-dimensional plane, the coordinates of the left and right mouth corners are assumed to be (x_ml, y_ml) and (x_mr, y_mr).
S4070, calculating the mouth center position. The mouth center point (x_mm, y_mm) is calculated from the coordinates of the left and right mouth corners as follows:

x_mm = (x_ml + x_mr) / 2

y_mm = (y_ml + y_mr) / 2
S4080, calculating the left-right face dividing line. In an embodiment, the calculated eye distance center point (x_em, y_em) and the calculated mouth center point (x_mm, y_mm) may be connected, and this connecting line is taken as the dividing line between the left face and the right face.
S4090, dividing the left face region and the right face region.
To reduce the influence of irrelevant factors (such as unevenly distributed hair or a beard) on the calculation of the front face coefficient, the face region may be bounded in the vertical direction: the left and right eye center points of the face are connected and the line is extended to serve as the upper boundary line of face acquisition (i.e., the first acquisition boundary line). Similarly, the left and right mouth corner points of the face are connected and the line is extended to serve as the lower boundary line of face acquisition (i.e., the second acquisition boundary line). The area between the upper and lower boundary lines of face acquisition is then taken as the effective face region, which may be denoted region_face. After the effective face region region_face is obtained, it can be divided along the left-right face dividing line into the left face region region_l-face and the right face region region_r-face.
S4100, calculating the left face area and the right face area. The area area_l-face of the left face region region_l-face and the area area_r-face of the right face region region_r-face are calculated respectively.
In an embodiment, the manner of calculating the left and right face areas is not limited. For example, the values of the left and right face areas may be quantized using pixel points: the number of pixel points in each of the left face region and the right face region is counted, and that count is used as the quantized value of the corresponding area.
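As a minimal sketch of this pixel-count quantization, the following assumes the left and right face regions are available as boolean masks over the image (hypothetical inputs produced by the segmentation step above):

import numpy as np

def region_areas(left_mask, right_mask):
    # Quantize each region's area as the number of pixels set in its
    # boolean mask, as described above.
    area_l = int(np.count_nonzero(left_mask))
    area_r = int(np.count_nonzero(right_mask))
    return area_l, area_r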
S4110, calculating the front face coefficient.

The front face coefficient may be denoted γ.
For example, 1 may be subtracted from the ratio of the left face area to the right face area, and the absolute value of the result taken as the front face coefficient γ (taking the absolute value ensures that the front face coefficient obtained under normal conditions is non-negative, so that it can be distinguished from the value -1 preset when no human face exists in the image). To prevent an exception when the denominator is 0 (for example, when only a side view of half the face is visible), the denominator may be replaced by an arbitrarily small positive number so that the calculation can always proceed. The front face coefficient γ is calculated as follows:
γ = | area_l-face / area_r-face - 1 |

or

γ = | area_r-face / area_l-face - 1 |
S4120, returning the front face coefficient to the controller.
S4130, returning a front face coefficient of -1. When there is no human face in the image, the front face coefficient may be preset to -1, and this value is returned to the controller.
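Putting S4010 through S4130 together, a condensed sketch of the whole calculation might look as follows. Here detect_face and segment_left_right are hypothetical stand-ins for the face detection and region segmentation steps described above, and region_areas is the pixel-count sketch given earlier; none of these names are APIs defined by this application:

EPSILON = 1e-9  # stands in for the arbitrarily small positive denominator guard

def front_face_coefficient(image):
    face = detect_face(image)  # S4020: any face detector may be used
    if face is None:
        return -1.0            # S4130: no face in the image
    left_mask, right_mask = segment_left_right(face)      # S4030-S4090
    area_l, area_r = region_areas(left_mask, right_mask)  # S4100
    # S4110: |left/right - 1|, guarding against a zero denominator
    return abs(area_l / max(area_r, EPSILON) - 1.0)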
In an embodiment, fig. 7 is a schematic structural diagram of a control device for operation authority according to an embodiment of the present application. As shown in fig. 7, the apparatus includes: an image acquisition module 71, a device determination module 72, and an authority switching module 73.
The image acquiring module 71 is configured to acquire a face image of the target object acquired by each image acquiring device.
The device determination module 72 is configured to determine a target device in the candidate device set according to the front face coefficient of each face image, where the candidate device set comprises at least two candidate devices, the front face coefficient is used to determine the candidate device that the target object is facing, and the candidate device that the target object is facing is determined as the target device.
The permission switching module 73 is configured to switch the operation target of the external device to the target device, so as to perform operation control on the target device through the external device.
According to this embodiment of the application, the image acquiring module collects the facial images of the target object, the device determination module determines the corresponding front face coefficients from the collected facial images and screens the candidate devices accordingly to obtain the corresponding target device, and the permission switching module automatically switches the operation authority of the external device to the target device so that the target device is operated and controlled through the external device. The operation authority of the external device is thus switched automatically according to the face orientation of the target object, which improves the convenience of switching the operation authority of the external device and the use experience of the user.
In one embodiment, the image acquisition module 71 includes:
and the instruction sending unit is used for sending an image acquisition instruction to each image acquisition device so that each image acquisition device acquires a face image of the target object.
And the image acquisition unit is used for sequentially acquiring the facial images of the target object acquired by each image acquisition device in a timing polling mode.
In one embodiment, the device determination module 72 includes:
and the positive face coefficient acquisition unit is used for determining the positive face coefficient of each face image to obtain a positive face coefficient set.
And the positive face coefficient screening unit is used for screening the positive face coefficient set according to a preset positive face coefficient threshold value to obtain a corresponding effective positive face coefficient set.
And the device confirming unit is used for confirming the target device in the candidate device set according to the effective positive face coefficient set.
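The screening step can be illustrated with a short sketch. The threshold value below is purely illustrative, since this application does not fix a concrete number; coefficients of -1 (no face) and coefficients above the threshold are dropped, a smaller γ meaning a more nearly frontal view:

FRONT_FACE_THRESHOLD = 0.2  # hypothetical example value

def screen_coefficients(coeffs):
    # coeffs maps a candidate-device id to its front face coefficient;
    # only valid, sufficiently frontal coefficients are kept.
    return {dev: g for dev, g in coeffs.items()
            if 0.0 <= g <= FRONT_FACE_THRESHOLD}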
In one embodiment, the front face coefficient acquiring unit includes:
and the face recognition unit is used for recognizing and extracting the initial face area in each face image.
And the boundary line confirmation unit is used for determining a boundary line between the left face and the right face according to the pre-acquired eye distance center position and the mouth center position.
And the region dividing unit is used for dividing the initial face region according to the left and right face boundary lines to obtain a corresponding left face region and a corresponding right face region.
And a front face coefficient confirming unit for confirming the front face coefficient of the corresponding face image according to the area of the left side face area and the area of the right side face area.
And the set establishing unit is used for forming the front face coefficients of each face image into a corresponding front face coefficient set.
In one embodiment, the region division unit includes:
and the first boundary line acquisition unit is used for determining a first acquisition boundary line of the initial face area according to the central positions of the two eyes.
And the second boundary line acquisition unit is used for determining a second acquisition boundary line of the initial face area according to the two mouth angle positions.
And the region intercepting unit is used for intercepting the effective face region in the initial face region according to the first acquisition boundary line and the second acquisition boundary line.
And the region separation unit is used for dividing the effective human face region according to the left and right face boundary lines to obtain a corresponding left face region and a corresponding right face region.
In one embodiment, the device confirmation unit includes:
and the device confirming unit is used for confirming the corresponding effective image acquisition device set according to the effective frontal face coefficient.
And the equipment confirming unit is used for screening the candidate equipment set according to the expansion connecting lines corresponding to each effective image acquisition device in the effective image acquisition device set to obtain a corresponding effective equipment set.
And the target equipment confirming unit is used for confirming the corresponding target equipment according to the front face coefficient corresponding to each equipment in the effective equipment set.
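On this reading, the final confirmation step simply picks, among the devices that survived screening, the one whose image yields the smallest front face coefficient (left and right face areas most nearly equal, i.e. the most frontal view). A minimal sketch, assuming ties may be broken arbitrarily since no tie-breaking rule is specified here:

def confirm_target_device(valid_coeffs):
    # valid_coeffs maps a candidate-device id to its valid front face
    # coefficient; returns the id with the smallest coefficient, or
    # None when no candidate passed the screening.
    if not valid_coeffs:
        return None
    return min(valid_coeffs, key=valid_coeffs.get)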
In one embodiment, after the target device is determined in the candidate device set according to the front face coefficient of each face image, the control device of the operation authority further includes:

A cursor data acquiring unit, configured to acquire historical cursor data of the target device.

A cursor position confirming unit, configured to determine, according to the historical cursor data, the cursor starting position of the external device on the display screen corresponding to the target device.

In one embodiment, after the target device is determined in the candidate device set according to the front face coefficient of each face image, the control device of the operation authority further includes:

A cursor initializing unit, configured to initialize the cursor starting position of the external device to the center position of the display screen corresponding to the target device.

In one embodiment, after the target device is determined in the candidate device set according to the front face coefficient of each face image, the control device of the operation authority further includes:

An indicator lamp lighting unit, configured to send a turn-on instruction to the status indicator lamp of the target device so that the status indicator lamp lights up.
The control device of the operation authority provided by the embodiments of the application can execute the control method of the operation authority provided by any embodiment of the application, and has the corresponding functional modules and the beneficial effects of the executed method.
In an embodiment, fig. 8 is a schematic structural diagram of a control system of an operation authority according to an embodiment of the present application, where the control system of the operation authority includes: an external device 81, a controller 82, at least two image capturing devices 83 and at least two candidate devices 84.
The external device 81 is electrically connected to the controller 82; the controller 82 is electrically connected to each candidate device 84 and each image acquisition device 83 through the extended connection lines 85; and the candidate devices 84 correspond to the image acquisition devices 83 one to one.
The controller 82 acquires the facial image of the target object acquired by each image acquisition device 83, determines the target device from the candidate devices 84 according to the front face coefficient of each facial image, and automatically switches the operation authority of the external device 81 to the target device so that the target device is operated and controlled through the external device 81.

The external device 81 is configured to control the target device, where the target device may be determined according to the front face coefficient of each facial image and the candidate devices 84.

The controller 82 is configured to acquire the facial image of the target object acquired by each image acquisition device 83, determine the target device according to the front face coefficient of each facial image and the candidate devices 84, and automatically switch the operation authority of the external device 81 to the target device.
The image acquisition devices 83 are configured to acquire the facial images of the target object.

The candidate devices 84 provide the pool of devices from which the target device is screened.

The extended connection lines 85 connect the controller 82 to each candidate device 84 and each image acquisition device 83.

An extended connection line 85 refers to a connection line through which the controller 82 extends its control to a candidate device 84, and may illustratively include a universal serial bus (USB) line or the like.
In this embodiment, the controller 82 is electrically connected to each candidate device 84 and each image acquisition device 83 through the extended connection lines 85, the external device 81 is electrically connected to the controller 82, and the candidate devices 84 correspond to the image acquisition devices 83 one to one. The controller 82 may acquire the facial image of the target object acquired by each image acquisition device 83, determine the target device according to the front face coefficient of each facial image and the candidate devices 84, and automatically switch the operation authority of the external device 81 to the target device, which is then controlled by the external device 81.
In an embodiment, fig. 9 is a schematic structural diagram of a control system of an operation authority according to an embodiment of the present application, where the control system of an operation authority further includes: at least two status indicator lights 86.
The controller 82 is electrically connected to each status indicator lamp 86 through an extended connection line 85, and the candidate devices 84 correspond to the status indicator lamps 86 one to one.

The controller 82 sends a turn-on instruction to the status indicator lamp 86 of the target device so that the status indicator lamp 86 lights up.

In an embodiment, the status indicator lamp 86 is configured to light up after the target device is determined according to the front face coefficient of each facial image and the candidate device set 84, so as to indicate the corresponding target device.
In an embodiment, fig. 10 is a diagram of an example of a control system of an operation authority according to an embodiment of the present application. An example of the control system of the operation authority is described by taking the candidate devices 84 to be five computers 841, the image acquisition devices 83 to be five cameras 831 with five corresponding status indicator lamps 86, and the external device 81 to be a keyboard 811 and a mouse 812. As shown in fig. 10, the control system of the operation authority includes: a keyboard 811, a mouse 812, the controller 82, five cameras 831, five computers 841, and five status indicator lamps 86.
The keyboard 811 and the mouse 812 are electrically connected to the controller 82; the controller 82 is electrically connected to each computer 841, each camera 831 and each status indicator lamp 86 through the extended connection lines 85; and the computers 841 correspond to the cameras 831 and the status indicator lamps 86 one to one.
The controller 82 may acquire the facial image of the target object acquired by each camera 831, determine the target device according to the front face coefficient of each facial image and the computers 841, and automatically switch the operation authority of the external device to the target device so that the target device is operated and controlled through the keyboard 811 and the mouse 812; at the same time, the controller 82 sends a turn-on instruction to the status indicator lamp 86 of the target device so that the status indicator lamp 86 lights up.
In one embodiment, the keyboard 811 and the mouse 812 are connected to the controller 82, and the controller 82 is extended through its interfaces and connected to different computers 841 (only five computers 841 are shown in the figure; in practice the setup may be expanded as needed). The extended connection line 85 from the controller 82 to each computer 841 includes a control line (for example, USB) for operating that computer 841, as well as the connection lines of the corresponding camera 831 and status indicator lamp 86. The controller 82 arbitrates the control right of the extended connection lines 85 through a timed polling task (the time interval may be set at the millisecond level to ensure response speed). In each polling round, each camera 831 collects the head image of the user and sends the collected image to the controller 82. The controller 82 identifies the face information in each image, calculates the front face coefficient from it, and selects the extended connection line 85 that acquires the control right according to the front face coefficients of the head images collected by the cameras 831. The status indicator lamp 86 corresponding to the extended connection line 85 that acquires the control right is lit; at the same time, the historical mouse focus data of the corresponding computer 841 is queried, and if such data exists, it is loaded and the mouse is initialized to that position; if not, the mouse position may be initialized to the center of the screen. The status indicator lamps 86 corresponding to the extended connection lines 85 that do not acquire the control right are turned off, and the current mouse focus of each such computer is stored (for initializing the mouse position when the control right is acquired later).
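A condensed sketch of one polling round is given below, reusing the sketches above. All controller, camera and indicator interfaces here (capture, set_on and so on) are hypothetical stand-ins chosen for illustration; a real implementation depends on the concrete hardware and bus:

import time

POLL_INTERVAL_S = 0.05  # millisecond-level polling; the value is illustrative

def polling_task(cameras, indicators):
    # cameras / indicators map a candidate-device id to its camera and
    # status indicator lamp. Each round: capture, score, pick a winner,
    # and light only the winner's lamp.
    while True:
        coeffs = {dev: front_face_coefficient(cam.capture())
                  for dev, cam in cameras.items()}
        target = confirm_target_device(screen_coefficients(coeffs))
        for dev, lamp in indicators.items():
            lamp.set_on(dev == target)  # focus state for the winner only
        time.sleep(POLL_INTERVAL_S)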
In one embodiment, when the leftmost computer 841 in fig. 10 is the target device, the status indicator lamp 86 corresponding to that computer 841 changes to the focus state, that is, the status indicator lamp 86 lights up. Meanwhile, the historical cursor starting position of the mouse 812 on that computer 841 is queried: if it exists, it is loaded and used as the cursor starting position of the mouse 812; if not, the cursor starting position of the mouse 812 is initialized to the center of the screen. The status indicator lamps corresponding to the remaining four computers 841 change to the non-focus state, that is, the corresponding status indicator lamps 86 are turned off, and the current cursor position of the mouse 812 on each of them is stored. The computer 841 whose status indicator lamp 86 is lit acquires the operation authority of the keyboard 811 and the mouse 812.
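The cursor focus handling described above amounts to saving a computer's cursor position when it loses focus and restoring it (or falling back to the screen center) when it regains focus. A minimal sketch, with the storage and cursor-moving calls as assumed placeholders:

cursor_history = {}  # device id -> last known (x, y) cursor position

def on_focus_change(old_dev, new_dev, get_cursor, set_cursor, screen_center):
    # get_cursor / set_cursor are hypothetical callables supplied by the
    # controller for reading and moving a computer's cursor.
    if old_dev is not None:
        cursor_history[old_dev] = get_cursor(old_dev)   # save before leaving
    start = cursor_history.get(new_dev, screen_center)  # restore or center
    set_cursor(new_dev, start)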
With this apparatus, the user can select the computer 841 to be controlled simply by turning the head, and the computer 841 that currently holds the operation right is indicated by its status indicator lamp 86. When the user operates the keyboard 811 and the mouse 812, the computer 841 that holds the focus accepts the user's input; conversely, the computers 841 that do not hold the focus shield the user's keyboard 811 and mouse 812 operations.
In an embodiment, fig. 11 is a schematic structural diagram of an electronic device 10 implementing a method for controlling an operation authority according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 11, the electronic device 10 includes at least one processor 11, and a memory communicatively connected to the at least one processor 11, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, and the like, wherein the memory stores a computer program executable by the at least one processor, and the processor 11 can perform various suitable actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from a storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data necessary for the operation of the electronic apparatus 10 can also be stored. The processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to the bus 14.
A number of components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, or the like; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The processor 11 executes the respective methods and processes described above, for example, a control method of an operation authority.
In some embodiments, the control method of the operation authority may be implemented as a computer program tangibly embodied in a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the control method of the operation authority described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured in any other suitable way (e.g., by means of firmware) to execute the control method of the operation authority.
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for implementing the methods of the present application may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowcharts and/or block diagrams to be performed. A computer program can execute entirely on a machine, partly on a machine, as a stand-alone software package partly on a machine and partly on a remote machine, or entirely on a remote machine or server.
In the context of this application, a computer readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user may provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the Internet.
The computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the defects of high management difficulty and weak service expansibility found in traditional physical hosts and virtual private server (VPS) services.
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and no limitation is imposed herein as long as the desired results of the technical solutions of the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (13)

1. A method for controlling operation authority, comprising:
acquiring a face image of a target object acquired by each image acquisition device;
determining a target device in a candidate device set according to the front face coefficient of each face image; wherein the candidate device set comprises at least two candidate devices, the front face coefficient is used to determine the candidate device that the target object is facing, and the candidate device that the target object is facing is determined as the target device;
and switching an operation target of the external equipment to the target equipment so as to carry out operation control on the target equipment through the external equipment.
2. The method of claim 1, further comprising, after the determining a target device in a candidate device set according to the front face coefficient of each of the face images:
acquiring historical cursor data of the target device;
and determining the cursor starting position of the external device on the display screen corresponding to the target device according to the historical cursor data.
3. The method of claim 1, further comprising, after the determining a target device in a candidate device set according to the front face coefficient of each of the face images:
and initializing the cursor starting position of the external equipment to the central position of the display screen corresponding to the target equipment.
4. The method of claim 1, further comprising, after the determining a target device in a candidate device set according to the front face coefficient of each of the face images:
and sending a starting instruction to a status indicator lamp of the target equipment so as to enable the status indicator lamp to be lightened.
5. The method according to any one of claims 1-4, wherein the acquiring the facial image of the target object acquired by each image acquisition device comprises:

sending an image acquisition instruction to each image acquisition device so that each image acquisition device acquires a facial image of the target object;

and sequentially acquiring, in a timed polling manner, the facial images of the target object acquired by each image acquisition device.
6. The method according to any one of claims 1-4, wherein the determining a target device according to the front face coefficient of each of the facial images and the candidate device set comprises:

determining the front face coefficient of each face image to obtain a front face coefficient set;

screening the front face coefficient set according to a preset front face coefficient threshold to obtain a corresponding set of valid front face coefficients;

and determining the target device in the candidate device set according to the set of valid front face coefficients.
7. The method of claim 6, wherein the determining the front face coefficient of each of the face images to obtain a front face coefficient set comprises:

identifying and extracting an initial face region in each face image;

determining a left-right face dividing line according to the pre-acquired eye distance center position and mouth center position;

dividing the initial face region along the left-right face dividing line to obtain a corresponding left face region and a corresponding right face region;

determining the front face coefficient of the corresponding face image according to the area of the left face region and the area of the right face region;

and forming the front face coefficients of the face images into a corresponding front face coefficient set.
8. The method of claim 7, wherein the dividing the initial face region along the left-right face dividing line to obtain a corresponding left face region and a corresponding right face region comprises:

determining a first acquisition boundary line of the initial face region according to the center positions of the two eyes;

determining a second acquisition boundary line of the initial face region according to the positions of the two mouth corners;

intercepting an effective face region from the initial face region according to the first acquisition boundary line and the second acquisition boundary line;

and dividing the effective face region along the left-right face dividing line to obtain the corresponding left face region and right face region.
9. The method of claim 6, wherein the determining the target device according to the set of valid front face coefficients and the candidate device set comprises:

determining a corresponding set of valid image acquisition devices according to the valid front face coefficients;

screening the candidate device set according to the extended connection line corresponding to each valid image acquisition device in the set of valid image acquisition devices to obtain a corresponding valid device set;

and determining the corresponding target device according to the front face coefficient corresponding to each device in the valid device set.
10. A system for controlling operation authority, comprising: an external device, a controller, at least two image acquisition devices and at least two candidate devices; the external device is electrically connected to the controller; the controller is electrically connected to each candidate device and each image acquisition device through an extended connection line; and the candidate devices correspond to the image acquisition devices one to one;

wherein the controller acquires a facial image of a target object acquired by each image acquisition device, determines a target device from the candidate devices according to the front face coefficient of each facial image, the front face coefficient being used to determine the candidate device that the target object is facing, determines the candidate device that the target object is facing as the target device, and switches an operation target of the external device to the target device so as to perform operation control on the target device through the external device.

11. The system of claim 10, wherein the control system of the operation authority further comprises: at least two status indicator lamps; the controller is electrically connected to each status indicator lamp through an extended connection line; and the candidate devices correspond to the status indicator lamps one to one;

wherein the controller sends a turn-on instruction to the status indicator lamp of the target device so that the status indicator lamp lights up.
12. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor, the computer program being executed by the at least one processor to enable the at least one processor to perform the control method of operation authority of any one of claims 1-9.
13. A computer-readable storage medium, characterized in that it stores computer instructions for causing a processor to implement, when executed, a method of controlling operation authority according to any one of claims 1-9.
CN202211027290.XA 2022-08-25 2022-08-25 Control method and control system of operation authority, electronic device and storage medium Pending CN115454237A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211027290.XA CN115454237A (en) 2022-08-25 2022-08-25 Control method and control system of operation authority, electronic device and storage medium
PCT/CN2023/071308 WO2024040861A1 (en) 2022-08-25 2023-01-09 Operation permission control method and system, and electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211027290.XA CN115454237A (en) 2022-08-25 2022-08-25 Control method and control system of operation authority, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN115454237A true CN115454237A (en) 2022-12-09

Family

ID=84298723

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211027290.XA Pending CN115454237A (en) 2022-08-25 2022-08-25 Control method and control system of operation authority, electronic device and storage medium

Country Status (2)

Country Link
CN (1) CN115454237A (en)
WO (1) WO2024040861A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024040861A1 (en) * 2022-08-25 2024-02-29 湖北星纪魅族科技有限公司 Operation permission control method and system, and electronic device and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015050545A1 (en) * 2013-10-03 2015-04-09 Thomson Licensing Improved multi-screen management
CN108919982B (en) * 2018-06-14 2020-10-20 北京理工大学 Automatic keyboard-mouse switching method based on face orientation recognition
CN109803450A (en) * 2018-12-12 2019-05-24 平安科技(深圳)有限公司 Wireless device and computer connection method, electronic device and storage medium
CN112214103A (en) * 2020-08-28 2021-01-12 深圳市修远文化创意有限公司 Method for connecting wireless control equipment with computer and related device
CN115454237A (en) * 2022-08-25 2022-12-09 湖北星纪时代科技有限公司 Control method and control system of operation authority, electronic device and storage medium


Also Published As

Publication number Publication date
WO2024040861A1 (en) 2024-02-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 430050 No. b1337, chuanggu startup area, taizihu cultural Digital Creative Industry Park, No. 18, Shenlong Avenue, Wuhan Economic and Technological Development Zone, Hubei Province

Applicant after: Hubei Xingji Meizu Technology Co.,Ltd.

Address before: 430050 No. b1337, chuanggu startup area, taizihu cultural Digital Creative Industry Park, No. 18, Shenlong Avenue, Wuhan Economic and Technological Development Zone, Hubei Province

Applicant before: Hubei Xingji times Technology Co.,Ltd.
