CN111400693B - Method and device for unlocking target object, electronic equipment and readable medium - Google Patents


Info

Publication number
CN111400693B
CN111400693B (application CN202010193363.7A)
Authority
CN
China
Prior art keywords
picture
unlocking
locking
user
matching degree
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010193363.7A
Other languages
Chinese (zh)
Other versions
CN111400693A (en)
Inventor
谢飞 (Xie Fei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Youzhuju Network Technology Co Ltd
Original Assignee
Beijing Youzhuju Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Youzhuju Network Technology Co Ltd filed Critical Beijing Youzhuju Network Technology Co Ltd
Priority to CN202010193363.7A priority Critical patent/CN111400693B/en
Publication of CN111400693A publication Critical patent/CN111400693A/en
Application granted granted Critical
Publication of CN111400693B publication Critical patent/CN111400693B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30: Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31: User authentication
    • G06F 21/36: User authentication by graphic or iconic representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the disclosure provide a method, an apparatus, an electronic device, and a readable medium for unlocking a target object. The method includes the following steps: receiving an unlocking picture for a target object and/or an unlocking gesture for the target object, input by a user; acquiring a locking picture and/or a locking gesture preset by the user for the target object; determining a first matching degree between the unlocking picture and the locking picture, and/or a second matching degree between the unlocking gesture and the locking gesture; and when the first matching degree and/or the second matching degree meets a preset condition, determining that the target object is unlocked. Because the target object is locked/unlocked by means of a picture and a gesture, the available locking/unlocking forms are increased and become more diversified; further, because the unlocking result is determined based on multiple matching degrees, the security of locking/unlocking can be improved, which in turn improves the user experience.

Description

Method and device for unlocking target object, electronic equipment and readable medium
Technical Field
The disclosure relates to the technical field of computers, and in particular relates to a method, a device, electronic equipment and a readable medium for unlocking a target object.
Background
With the popularization of intelligent terminals, people store more and more important information on them, and information that needs protection is generally given a locking/unlocking function to keep it secure. In the prior art, locking/unlocking is usually implemented with a password composed of several digits or by connecting a path on a nine-square grid. However, these implementations are not personalized, and because the number of possible digit passwords and nine-square-grid paths is limited, their security is poor and they are easy to crack.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In a first aspect, an embodiment of the present disclosure provides a method for unlocking a target object, the method including:
receiving an unlocking picture for a target object and/or an unlocking gesture for the target object, input by a user;
acquiring a locking picture and/or a locking gesture preset by the user for the target object;
determining a first matching degree between the unlocking picture and the locking picture, and/or a second matching degree between the unlocking gesture and the locking gesture;
and when the first matching degree and/or the second matching degree meets a preset condition, determining that the target object is unlocked.
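The four steps of the first aspect can be sketched as a single function. Everything concrete below — the lock store, the 0.8 thresholds, and the simple per-pixel and per-axis comparisons — is an illustrative stand-in, not a detail from the disclosure.

```python
# Minimal sketch of steps S110-S140; all names and values are assumed examples.
LOCK_STORE = {"notes_app": {"picture": [10, 20, 30], "gesture": (0, 45, 0)}}

def try_unlock(target_id, unlock_picture=None, unlock_gesture=None):
    lock = LOCK_STORE.get(target_id, {})           # acquire preset lock info
    first = second = None
    if unlock_picture is not None and "picture" in lock:
        # first matching degree: mean per-pixel agreement in [0, 1]
        diff = sum(abs(p - q) for p, q in zip(unlock_picture, lock["picture"]))
        first = 1.0 - diff / (255 * len(lock["picture"]))
    if unlock_gesture is not None and "gesture" in lock:
        # second matching degree: 1.0 when every axis is within 5 degrees
        second = float(all(abs(m - c) <= 5
                           for m, c in zip(unlock_gesture, lock["gesture"])))
    # unlocked only when every matching degree that was computed passes
    checks = [degree > 0.8 for degree in (first, second) if degree is not None]
    return bool(checks) and all(checks)
```

Under these stand-in settings, `try_unlock("notes_app", unlock_picture=[10, 20, 30])` succeeds, while a mismatched picture or gesture, or no input at all, fails.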
In a second aspect, an embodiment of the present disclosure provides an apparatus for unlocking a target object, including:
an unlocking information receiving device, configured to receive an unlocking picture for the target object and/or an unlocking gesture for the target object, input by a user;
a locking information acquisition device, configured to acquire a locking picture and/or a locking gesture preset by the user for the target object;
a matching degree determining device, configured to determine a first matching degree between the unlocking picture and the locking picture, and/or a second matching degree between the unlocking gesture and the locking gesture;
and an unlocking result determining device, configured to determine that the target object is unlocked when the first matching degree and/or the second matching degree meets a preset condition.
In a third aspect, the present disclosure provides an electronic device comprising a processor and a memory;
the memory being configured to store computer operation instructions;
and the processor being configured to perform the method shown in the first aspect of the embodiments of the present disclosure by invoking the computer operation instructions.
In a fourth aspect, the present disclosure provides a computer readable medium storing at least one instruction, at least one program, code set, or instruction set, the at least one instruction, at least one program, code set, or instruction set being loaded and executed by a processor to implement a method as shown in the first aspect of the embodiments of the present disclosure.
The technical scheme provided by the embodiment of the disclosure has the beneficial effects that:
In the embodiments of the disclosure, a locking picture and/or a locking gesture can be preset for a target object; when the target object is to be unlocked, the matching degree between the unlocking picture and the locking picture and/or between the unlocking gesture and the locking gesture is determined, and the target object is unlocked when the matching degree meets a set condition. That is, in the embodiments of the present disclosure, the target object is locked/unlocked by means of pictures and gestures, which adds locking/unlocking forms and makes them more diversified. Further, because the unlocking result is determined based on the matching degree between the unlocking picture and the locking picture and/or between the unlocking gesture and the locking gesture, the security of locking/unlocking can be improved compared with the prior art, which relies only on digits or the nine-square grid, thereby further improving the user experience.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a flow chart of a method for unlocking a target object in an embodiment of the disclosure;
FIG. 2 is a schematic diagram of a terminal device and a rotation angle in an embodiment of the disclosure;
FIG. 3 is a schematic structural diagram of a device for unlocking a target object according to an embodiment of the disclosure;
FIG. 4 is a schematic structural diagram of an electronic device in an embodiment of the disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and completely. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are used merely to distinguish one from another device, module, or unit, and are not intended to limit the device, module, or unit to the particular device, module, or unit or to limit the order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Embodiments of the present disclosure provide a method for unlocking a target object, as shown in fig. 1, the method may include:
Step S110, receiving an unlocking picture for a target object and/or an unlocking gesture for the target object, input by a user.
In practical application, the specific type of the target object is not limited in the embodiment of the disclosure, and the target object may be, for example, an application program installed in a terminal device, a user interface of the application program, the terminal device itself, a file or a picture stored in the device, or the like.
When the user wants to open or use the target object, and the target object has been locked, the user is prompted to input the corresponding unlocking information, which in the embodiments of the present disclosure is an unlocking picture and/or an unlocking gesture.
Step S120, obtaining a locking picture and/or a locking gesture preset by the user for the target object.
In practical application, when an unlocking picture and/or an unlocking gesture input by the user is received, the preset locking picture and/or locking gesture of the target object can be acquired, and the unlocking result for the target object can then be determined from the unlocking picture input by the user and the set locking picture, and/or from the unlocking gesture and the set locking gesture. Specifically, when the user opens the target object or starts an application, the locking picture and/or locking gesture of the target object, that is, the preset picture or gesture used to lock it, can be obtained based on the object identifier of the target object to be unlocked.
The unlocking result of the target object covers two cases: either the target object passes unlocking, or it does not. It will be appreciated that the user may be allowed to access the target object when the unlocking result is a pass, and denied access when the unlocking result is a failure.
Step S130, determining a first matching degree between the unlocking picture and the locking picture, and/or a second matching degree between the unlocking gesture and the locking gesture.
Step S140, when the first matching degree and/or the second matching degree meets the preset condition, determining that the target object is unlocked.
In practical application, when determining the unlocking result of the target object, the first matching degree between the unlocking picture input by the user and the locking picture and/or the second matching degree between the unlocking gesture and the locking gesture can be determined, and it is then determined whether the obtained first matching degree and/or second matching degree meets the preset condition. Accordingly, when the obtained matching degree meets the preset condition, the unlocking result is that the target object is unlocked, i.e., the user can access the target object; when it does not, the unlocking result is that the target object is not unlocked, i.e., the user cannot access the target object.
The embodiments of the present disclosure do not limit the specific implementation of determining the first matching degree between the unlocking picture and the locking picture. For example, the similarity between the unlocking picture and the locking picture may be computed directly: the higher the similarity, the higher the first matching degree, and the lower the similarity, the lower the first matching degree. It may then be determined whether the first matching degree satisfies the preset condition.
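As one concrete (and assumed) instance of "directly calculating the similarity", the first matching degree could be the mean per-pixel agreement between two equally sized grayscale pictures; the representation and formula below are illustrative stand-ins, not details from the disclosure.

```python
def picture_similarity(unlock_pixels, lock_pixels):
    """Mean per-pixel agreement between two equal-sized grayscale
    pictures given as flat lists of 0-255 values; 1.0 means identical,
    0.0 means maximally different (or incomparable sizes)."""
    if not unlock_pixels or len(unlock_pixels) != len(lock_pixels):
        return 0.0
    total_diff = sum(abs(p - q) for p, q in zip(unlock_pixels, lock_pixels))
    return 1.0 - total_diff / (255 * len(lock_pixels))
```

Under this score, identical pictures reach a first matching degree of 1.0, and the preset condition can then simply be a threshold on this value.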
In an alternative embodiment of the present disclosure, the first matching degree may include an object matching degree between an object in the unlocking picture and the object at the corresponding position in the locking picture, and the preset condition includes the object matching degree being greater than a first threshold.
The objects in the locking picture or the unlocking picture may be persons, animals, plants, text characters, and the like contained in the picture; the embodiments of the disclosure do not limit this.
In practical application, if the unlocking picture and/or the locking picture contains objects, then when determining the first matching degree, instead of directly computing the similarity between the unlocking picture and the locking picture, the object matching degree between an object in the unlocking picture and the object at the corresponding position in the locking picture may be determined, and the preset condition may be set as the object matching degree being greater than a first threshold. That is, only when the determined object matching degree is greater than the first threshold can the first matching degree be considered to satisfy the preset condition.
It should be understood that "an object in the unlocking picture and the object at the corresponding position in the locking picture" refers to the matching degree between objects in corresponding regions of the two pictures. For example, if the unlocking picture and the locking picture each contain several objects, the object matching degree may refer to the matching degree between the object in the left region of the unlocking picture and the object in the left region of the locking picture, and so on. If each picture contains only one object, the determined object matching degree may be the matching degree between those two objects; accordingly, if only one of the two pictures (locking or unlocking) contains an object and the other does not, the object matching degree may simply not be determined.
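The region-by-region comparison above can be sketched as follows; the rule of returning `None` when only one picture contains objects follows the text, while the region labels and equality test are assumptions.

```python
def object_matching_degree(unlock_objects, lock_objects):
    """unlock_objects / lock_objects: dicts mapping a region label
    (e.g. 'left', 'right') to the object recognised there.
    Returns the fraction of regions whose objects agree, or None when
    only one of the two pictures contains objects."""
    if bool(unlock_objects) != bool(lock_objects):
        return None                      # degree is simply not determined
    if not lock_objects:
        return None                      # neither picture has objects
    regions = set(unlock_objects) | set(lock_objects)
    hits = sum(1 for r in regions
               if unlock_objects.get(r) == lock_objects.get(r))
    return hits / len(regions)
```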
In an optional embodiment of the disclosure, the first matching degree further includes a position matching degree between the position of an object in the unlocking picture and the position of the object at the corresponding location in the locking picture, and the preset condition further includes the position matching degree being greater than a second threshold.
In practical application, after determining the object matching degree between an object in the unlocking picture and the object at the corresponding position in the locking picture, the matching degree between the positions these objects occupy in their respective pictures can be further determined, and the preset condition can be set as the position matching degree being greater than a second threshold. That is, only when the determined position matching degree is greater than the second threshold is the preset condition satisfied and the unlocking result a pass; conversely, if the determined position matching degree is not greater than the second threshold, the preset condition is not satisfied and unlocking fails. In determining the position matching degree, for example, the matching degree between the region occupied by an object in the left part of the unlocking picture and the region occupied by the corresponding object in the left part of the locking picture may be computed.
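One plausible way to score how well the regions occupied by corresponding objects agree is intersection-over-union of their bounding boxes; this particular metric is an assumption, not named by the disclosure.

```python
def position_matching_degree(unlock_box, lock_box):
    """Intersection-over-union of the axis-aligned regions
    (x1, y1, x2, y2) occupied by the same object in the unlocking
    and locking pictures; 1.0 means the regions coincide."""
    ix1 = max(unlock_box[0], lock_box[0])
    iy1 = max(unlock_box[1], lock_box[1])
    ix2 = min(unlock_box[2], lock_box[2])
    iy2 = min(unlock_box[3], lock_box[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)

    def area(b):
        return (b[2] - b[0]) * (b[3] - b[1])

    union = area(unlock_box) + area(lock_box) - inter
    return inter / union if union else 0.0
```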
For the specific implementation of identifying the objects contained in the unlocking picture and the locking picture, reference may be made to existing image recognition technology; for example, OCR (Optical Character Recognition) technology may be used to obtain the text objects in a picture.
In practical application, if the objects identified in the unlocking picture and the locking picture are several text characters, the region formed by those characters can be treated as the object region, and when determining the object matching degree between the two pictures, it can be determined whether the character content in the object regions is similar and whether the different characters appear in the same order.
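Since both character content and character order matter here, a sequence-similarity score covers both at once; using `difflib` for this is an illustrative choice, not the disclosure's own method.

```python
import difflib

def text_matching_degree(unlock_text, lock_text):
    """Similarity of the character sequences recognised (e.g. by OCR)
    in the two object regions; identical content in identical order
    scores 1.0, and reordered or differing characters score lower."""
    return difflib.SequenceMatcher(None, unlock_text, lock_text).ratio()
```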
In the disclosed embodiments, the method is performed by a terminal device, and the unlocking gesture and the locking gesture are characterized by the rotation angle of the terminal device relative to a set direction.
In practical applications, the method provided by the embodiments of the present disclosure may be performed by a terminal device that includes a gyroscope, and the gestures for locking and unlocking the target object may be represented by the rotation angle between the terminal device and the set direction.
The user may preconfigure the unlocking gesture for the target object, where the gesture may be a rotation angle of the terminal device relative to the set direction, for example rotating the terminal 45 degrees to the right from the vertical. Accordingly, when an operation to open the target object or start the application is received, the terminal device can be rotated by a certain angle from a set starting position and held for a set duration; the gyroscope in the terminal device then calculates the current rotation angle between the terminal device and the set direction, and the calculated angle is taken as the input unlocking gesture. It is then determined whether the calculated angle is consistent with the configured angle representing the unlocking gesture (that is, the second matching degree between the unlocking gesture and the locking gesture is determined); if so, the second matching degree is determined to satisfy the preset condition, and otherwise it is not.
As shown in fig. 2, in this example, assume the three-dimensional space is decomposed into 8 calibration spaces based on the three-axis angles (x, y, z). When the terminal device is at the position shown in fig. 2 (i.e., the start position), the angle between the terminal device and the set direction is 0 degrees; when the terminal device rotates about any of the x-axis, y-axis, or z-axis, the gyroscope can calculate the rotation angle of the terminal device relative to the set direction at that moment, and the rotation angle can be represented using the three-axis angles (x, y, z).
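Checking whether the measured rotation is "consistent with" the configured angle presumably needs a per-axis tolerance; the 5-degree default below is an assumed value, not one given in the disclosure.

```python
def gesture_matches(measured, configured, tolerance=5.0):
    """measured / configured: rotation angles (x, y, z) in degrees,
    relative to the set direction, as reported by the gyroscope.
    The gestures match when every axis agrees within `tolerance`."""
    return all(abs(m - c) <= tolerance
               for m, c in zip(measured, configured))
```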
Further, in practical application, a plurality of unlocking gestures with a defined order may be set for the target object. Accordingly, when the user wants to unlock the target object, the user needs to input the unlocking gestures in sequence (i.e., repeatedly rotate the terminal device from the initial position by a certain angle and hold it for the set duration); the gyroscope in the terminal device determines the angle of each rotation in turn, matches them against the configured unlocking gestures in input order to obtain the second matching degree, and determines whether the second matching degree meets the preset condition. How the second matching degree is determined, and what counts as meeting the condition, may be preconfigured; for example, when a plurality of locking gestures is configured, the matching degree between each locking gesture and the corresponding unlocking gesture may be determined, and the condition may be that every locking gesture equals its unlocking gesture.
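The ordered multi-gesture case can be sketched by matching the i-th measured rotation against the i-th configured one; the per-axis tolerance is an assumed value, and scoring by the fraction of matching positions is one possible preconfigured rule, with "every gesture matches" corresponding to a degree of 1.0.

```python
def gesture_sequence_degree(measured_seq, configured_seq, tolerance=5.0):
    """Second matching degree for an ordered list of gestures: the
    fraction of positions where the measured (x, y, z) rotation agrees
    with the configured one within `tolerance` degrees per axis.
    A length mismatch scores 0.0."""
    if not configured_seq or len(measured_seq) != len(configured_seq):
        return 0.0
    hits = sum(
        all(abs(m - c) <= tolerance for m, c in zip(mg, cg))
        for mg, cg in zip(measured_seq, configured_seq))
    return hits / len(configured_seq)
```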
In addition, in practical applications, the user may sometimes find inputting unlocking gestures cumbersome. Therefore, in the embodiments of the present disclosure, characters (letters, digits, etc.) corresponding to the locking gestures may also be preconfigured, such as digit 1 corresponding to locking gesture 1 and digit 2 corresponding to locking gesture 2. When inputting the unlocking gestures, the user can then choose to input characters instead, for example typing digit 1 via the keyboard in place of unlocking gesture 1 and digit 2 in place of unlocking gesture 2. After receiving the digits 1 and 2 input by the user, the terminal device compares each of them with the digit corresponding to each locking gesture, and the comparison result serves as the second matching degree between the unlocking gesture and the locking gesture.
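This character-substitution path reduces to comparing the typed characters, in order, against the characters preassigned to the locking gestures. The digit assignments below follow the example in the text, while the gesture names themselves are placeholders.

```python
# Digit-to-gesture assignments taken from the example in the text;
# the gesture identifiers are hypothetical placeholders.
GESTURE_CHARS = {"locking_gesture_1": "1", "locking_gesture_2": "2"}

def match_by_characters(entered_chars, configured_gestures):
    """Compare the characters the user typed (in order) against the
    characters preassigned to the configured locking gestures; the
    comparison result serves as the second matching degree."""
    expected = [GESTURE_CHARS[g] for g in configured_gestures]
    return list(entered_chars) == expected
```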
It should be noted that in the embodiments of the present disclosure, the target object may be configured with both a locking picture and a locking gesture, or with only one of them; the embodiments of the disclosure do not limit this. Accordingly, when only a locking picture or only a locking gesture is set, the user only needs to input the unlocking picture or the unlocking gesture when unlocking the target object, and only whether the first matching degree or the second matching degree meets the preset condition is determined. When both are set, the user needs to input both the unlocking picture and the unlocking gesture, and it is determined whether both the first matching degree and the second matching degree meet the preset condition; in this case, the order in which the user inputs the unlocking picture and the unlocking gesture can be preconfigured, which the embodiments of the disclosure do not limit.
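The three configurations (picture only, gesture only, both) can be expressed as one condition check; the threshold values and the "greater than" comparison below are assumptions, not values from the disclosure.

```python
def unlock_result(first_match, second_match, picture_set, gesture_set,
                  first_threshold=0.8, second_threshold=0.8):
    """picture_set / gesture_set say which lock settings exist for the
    target object; only the corresponding matching degrees are checked,
    and when both are set, both conditions must hold."""
    if not picture_set and not gesture_set:
        return False                     # nothing configured to verify
    if picture_set and (first_match is None or first_match <= first_threshold):
        return False
    if gesture_set and (second_match is None or second_match <= second_threshold):
        return False
    return True
```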
In the embodiments of the disclosure, a locking picture and/or a locking gesture can be preset for a target object; when the target object is to be unlocked, the matching degree between the unlocking picture and the locking picture and/or between the unlocking gesture and the locking gesture is determined, and the target object can be unlocked when the matching degree meets the set condition. That is, in the embodiments of the present disclosure, the target object is locked/unlocked by means of pictures and gestures, which adds locking/unlocking forms and makes them more diversified. Further, because the unlocking result is determined based on the matching degree between the unlocking picture and the locking picture and/or between the unlocking gesture and the locking gesture, the security of locking/unlocking can be improved compared with the prior art, which relies only on digits or the nine-square grid, thereby further improving the user experience.
In an optional embodiment of the disclosure, receiving an unlock picture for a target object input by a user includes:
after receiving an unlocking trigger operation of the user for the target object, displaying an unlocking picture input interface;
and receiving, through the unlocking picture input interface, an unlocking picture input by the user.
The unlocking trigger operation is the action by which the user initiates unlocking of the target object, that is, the action indicating the user wants to input an unlocking picture; the way this operation is triggered can be preconfigured and is not limited by the embodiments of the present disclosure. For example, when the target object is an application program or a file installed on the terminal device, the user clicking the target object can be regarded as triggering the unlocking trigger operation for it; when the target object is the terminal device itself, performing a preset touch action on the terminal device can be regarded as triggering the unlocking trigger operation, for example the user double-tapping a preset area of the terminal device's screen.
Correspondingly, after receiving an unlocking trigger operation of a user for a target object, an unlocking picture input interface can be displayed, and at the moment, the user can input an unlocking picture based on the displayed unlocking picture input interface.
In practical application, the way in which the user inputs the unlocking picture through the displayed unlocking picture input interface can be preconfigured and is not limited by the embodiments of the disclosure; for example, the unlocking picture may be input in any of the following ways.
Receiving a first drawing operation of the user through a first graphic drawing area in the unlocking picture input interface, and generating the corresponding unlocking picture based on the first drawing operation.
Specifically, a first graphic drawing area may be set in the unlocking picture input interface, and the user may trigger the first drawing operation in this area, for example by drawing a line with a sliding gesture. When a drawing-completion operation is received, a drawn picture is generated from the first drawing operation and used as the unlocking picture. How the drawn picture is generated from the first drawing operation can be preconfigured and is not limited by the embodiments of the disclosure; for example, when the drawing-completion operation is received, a screenshot of the first graphic drawing area may be taken and used as the unlocking picture.
To help the user draw the picture, the first graphic drawing area can be provided with a drawing-tool mode that includes a number of function buttons, such as a color-fill button and a line-color button; accordingly, when the user clicks a function button, the corresponding function is enabled.
Receiving an unlocking picture selection trigger operation of the user through a picture selection trigger area of the unlocking picture input interface; displaying an unlocking picture selection interface based on that operation; and when an unlocking picture selection operation of the user is received through the unlocking picture selection interface, taking the picture corresponding to that operation as the unlocking picture.
The picture selection trigger area is the area of the unlocking picture input interface that can be used to trigger the unlocking picture selection trigger operation. In practical application, the user may not want to draw the unlocking picture manually and may instead use a stored picture directly; in that case, the unlocking picture selection trigger operation of the user is received through the picture selection trigger area of the unlocking picture input interface. An unlocking picture selection interface can then be displayed based on that operation, in which every picture that can serve as the unlocking picture is shown; classification items may also be displayed, and when the user selects an item, the pictures it contains are shown. Accordingly, when the user selects a picture through the unlocking picture selection interface, this is regarded as receiving the user's unlocking picture selection operation, and the selected picture is taken as the unlocking picture input by the user.
The manner in which the user triggers the unlocking picture selection triggering operation through the picture selection triggering area of the unlocking picture input interface can be preconfigured, and the embodiments of the disclosure are not limited in this respect. For example, a virtual button for triggering the unlocking picture selection triggering operation may be set in the picture selection triggering area of the unlocking picture input interface; when the user clicks the virtual button, the unlocking picture selection triggering operation is regarded as triggered, and the unlocking picture selection interface is then displayed.
And receiving unlocking picture shooting triggering operation of a user through an image shooting triggering area of the unlocking picture input interface, calling a shooting device based on the unlocking picture shooting triggering operation, and taking a picture shot by the shooting device as an unlocking picture.
The image shooting trigger area refers to an area which can be used for triggering unlocking picture shooting trigger operation in the unlocking picture input interface. In practical application, the user can also call the shooting device by triggering the unlocking picture shooting triggering operation through the image shooting triggering area in the unlocking picture input interface, then shoot pictures through the shooting device, and take the shot pictures directly as unlocking pictures.
The mode that the user triggers the unlocking picture shooting trigger operation through the image shooting trigger area of the unlocking picture input interface can be preconfigured, and the embodiment of the disclosure is not limited. For example, a virtual button for triggering an unlock picture shooting trigger operation may be set in an image shooting trigger area in the unlock picture input interface, when the user clicks the virtual button, the user is regarded as triggering the unlock picture shooting trigger operation, and then the shooting device may be invoked and operated, at this time, the user may shoot a picture based on the operated shooting device, and when a confirmation operation of the user for a certain shot picture is received, the picture is regarded as an unlock picture.
In an optional embodiment of the disclosure, before receiving the unlock picture for the target object input by the user, the method further includes:
displaying a locking picture input interface when a locking picture setting operation of a user aiming at a target object is received;
And receiving the locked picture input by the user through the locked picture input interface.
In practical application, if the user performs unlocking operation on the target object, it is indicated that the target object is provided with the locked picture. One optional way to set the locked picture for the target object is: when a locking picture setting operation of a user for a target object is received, a locking picture input interface is displayed, at this time, the user can input a picture based on the displayed locking picture input interface, and then the picture input by the user is used as a locking picture of the target object.
The locked picture setting operation refers to an action of the user setting a locked picture for the target object, and the manner of triggering the locked picture setting operation may be preconfigured; the embodiments of the present disclosure are not limited in this respect. For example, a virtual button for triggering a locked picture setting triggering operation (that is, an action indicating that the user wants to set a locked picture) may be provided in an application interface. When the user clicks the virtual button, the locked picture setting triggering operation is regarded as triggered; a target object selection interface may then be displayed, in which the objects for which a locked picture may be set are shown. When the user selects an object based on the target object selection interface, this is regarded as triggering the locked picture setting operation for that object, and the selected object is the target object. Of course, in practical application, when the target object is an application program or a file installed in the terminal device, a virtual button for triggering the locked picture setting operation may be displayed when the user clicks the target object; when the user clicks that virtual button, the locked picture setting operation for the target object may be considered to be triggered.
Further, when a locked picture setting operation of the user for the target object is received, a locked picture input interface may be displayed, and at this time, the user may input the locked picture through the locked picture input interface.
In practical application, the manner in which the user inputs the locked picture based on the displayed locked picture input interface may be preconfigured, and the embodiment of the disclosure is not limited, for example, the locked picture may be input by any one of the following manners:
and receiving a second drawing operation of a user through a second graph drawing area in the locked picture input interface, and generating a corresponding locked picture based on the second drawing operation.
Specifically, a second graphic drawing area may be set in the locked picture input interface, and the user may trigger a second drawing operation in the second graphic drawing area. Triggering the second drawing operation mode may mean that the user draws in the second graphic drawing area; further, when a drawing completion operation is received, a drawing picture is generated according to the second drawing operation, and the drawing picture is taken as a locked picture.
To help the user draw the picture, and similarly to the first graphic drawing area described above, the second graphic drawing area may also be configured as a drawing tool; that is, it may include various function buttons, such as a color-fill function button, a line-color function button, and the like; accordingly, when the user clicks a function button, the function corresponding to that button is enabled.
Receiving locking picture selection triggering operation of a user through a picture selection triggering area of a locking picture input interface; displaying a locking picture selection interface based on locking picture selection triggering operation; when the locking picture selection operation of the user is received through the locking picture selection interface, the picture corresponding to the locking picture selection operation is used as the locking picture.
In practical application, a user may not want to manually draw a locked picture but instead directly use a stored picture as the locked picture; in this case, the locked picture selection triggering operation of the user can be received through the picture selection triggering area in the locked picture input interface. Further, a locked picture selection interface may be displayed based on the locked picture selection triggering operation, where each picture that can be used as a locked picture may be displayed in the selection interface; classification items containing pictures may also be displayed, and when the user selects a certain item, each picture included under that item is displayed. Correspondingly, when the user selects a picture based on the locked picture selection interface, this is regarded as receiving the locked picture selection operation of the user, and the picture selected by the user is taken as the locked picture input by the user.
The manner in which the user triggers the locked picture selection triggering operation through the picture selection triggering area of the locked picture input interface can be preconfigured, and the embodiments of the disclosure are not limited in this respect. For example, a virtual button for triggering the locked picture selection triggering operation may be set in the picture selection triggering area of the locked picture input interface; when the user clicks the virtual button, the locked picture selection triggering operation is regarded as triggered, and the locked picture selection interface is then displayed.
And receiving a locked picture shooting triggering operation of the user through the image shooting triggering area of the locked picture input interface, calling a shooting device based on the locked picture shooting triggering operation, and taking the picture shot by the shooting device as the locked picture.
In practical application, the user can also call the shooting device by triggering the locking picture shooting triggering operation through the image shooting triggering area in the locking picture input interface, then shoot pictures through the shooting device, and take the shot pictures directly as locking pictures.
The manner in which the user triggers the locked picture shooting triggering operation through the image shooting triggering area of the locked picture input interface can be preconfigured, and the embodiments of the disclosure are not limited in this respect. For example, a virtual button for triggering the locked picture shooting triggering operation may be set in the image shooting triggering area of the locked picture input interface; when the user clicks the virtual button, the locked picture shooting triggering operation is regarded as triggered, and the shooting device may then be invoked and operated. At this time, the user may shoot a picture with the operating shooting device, and when a confirmation operation of the user for a certain shot picture is received, that picture is taken as the locked picture.
Further, after the locked picture input by the user is received through the locked picture input interface, the identification of the locked picture is associated with the target object, and when the unlocking operation for the target object is received, the locked picture can be determined according to the identification stored in association with the target object.
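The association step above can be sketched roughly as follows. This is a minimal illustration with assumed names (`LockRegistry` and the identifier strings are hypothetical), not the patented implementation: it stores the identification of the locked picture keyed by the target object, so that the locked picture can be looked up when an unlocking operation for that object is later received.

```python
# Minimal sketch (assumed names): associating a locked-picture identifier
# with a target object so the locked picture can be retrieved at unlock time.

class LockRegistry:
    """Maps a target object's identifier to its locked-picture identifier."""

    def __init__(self):
        self._locks = {}  # target_object_id -> locked_picture_id

    def set_locked_picture(self, target_object_id, locked_picture_id):
        # Called after the locked picture is received through the input interface.
        self._locks[target_object_id] = locked_picture_id

    def get_locked_picture(self, target_object_id):
        # Returns None when no locked picture has been set for the object.
        return self._locks.get(target_object_id)


registry = LockRegistry()
registry.set_locked_picture("photo_album", "pic_0042")
found = registry.get_locked_picture("photo_album")  # looked up at unlock time
```

A real terminal would persist this mapping rather than keep it in memory, but the lookup logic is the same.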
Based on the same principle as the method shown in fig. 1, an embodiment of the present disclosure further provides a target object unlocking apparatus 30. As shown in fig. 3, the target object unlocking apparatus 30 may include an unlocking picture receiving module 310, a locking picture obtaining module 320, a matching degree determining module 330, and an unlocking result determining module 340, where:
an unlock picture receiving module 310, configured to receive an unlock picture for a target object and/or an unlock gesture for the target object input by a user;
A locking picture obtaining module 320, configured to obtain a locking picture and/or a locking gesture of a target object preset by a user;
a matching degree determining module 330, configured to determine a first matching degree of the unlocking picture and the locking picture, and/or a second matching degree of the unlocking gesture and the locking gesture;
the unlocking result determining module 340 is configured to determine that the target object is unlocked when it is determined that the first matching degree and/or the second matching degree meet the preset condition.
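The decision made by the modules above can be sketched as follows (Python; the threshold values and the policy that every supplied matching degree must pass are illustrative assumptions, not values fixed by the disclosure): the target object is deemed unlocked only when each matching degree that was actually supplied meets its preset condition.

```python
# Hedged sketch of the unlocking decision: thresholds are illustrative.
def is_unlocked(first_match=None, second_match=None,
                first_threshold=0.8, second_threshold=0.8):
    """Return True when every matching degree that was supplied exceeds
    its threshold; at least one matching degree must be supplied."""
    checks = []
    if first_match is not None:   # picture-based unlocking
        checks.append(first_match > first_threshold)
    if second_match is not None:  # gesture-based unlocking
        checks.append(second_match > second_threshold)
    return bool(checks) and all(checks)


result = is_unlocked(first_match=0.9)                  # picture only: passes
both = is_unlocked(first_match=0.85, second_match=0.7)  # gesture fails
```

An implementation could instead accept either matching degree alone ("and/or"); the disclosure leaves this policy to the preset condition.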
In an optional embodiment of the disclosure, when receiving an unlock picture for a target object input by a user, the unlock picture receiving module is specifically configured to:
after receiving unlocking triggering operation of a user for a target object, displaying an unlocking picture input interface;
And receiving an unlocking picture input by a user through an unlocking picture input interface.
In an optional embodiment of the disclosure, when the unlock picture receiving module receives an unlock picture for a target object input by a user, the unlock picture receiving module includes any one of the following:
Receiving a first drawing operation of a user through a first graph drawing area in an unlocking picture input interface, and generating an unlocking picture based on the first drawing operation;
Receiving unlocking picture selection triggering operation of a user through a picture selection triggering area of an unlocking picture input interface; displaying an unlocking picture selection interface based on unlocking picture selection triggering operation; when receiving unlocking picture selection operation of a user through an unlocking picture selection interface, taking a picture corresponding to the unlocking picture selection operation as an unlocking picture;
And receiving unlocking picture shooting triggering operation of a user through an image shooting triggering area of the unlocking picture input interface, calling a shooting device based on the unlocking picture shooting triggering operation, and taking a picture shot by the shooting device as an unlocking picture.
In an alternative embodiment of the present disclosure, the first matching degree includes an object matching degree between an object in the unlock picture and an object in a corresponding position in the lock picture, and the preset condition includes the object matching degree being greater than a first threshold.
In an optional embodiment of the disclosure, the first matching degree further includes a position matching degree between the position of an object in the unlocking picture and the position of the corresponding object in the locking picture, and the preset condition further includes that the position matching degree is greater than a second threshold.
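A minimal sketch of the preset condition described in the two paragraphs above (Python; the threshold values 0.8 and 0.6 are illustrative assumptions): the first matching degree is satisfied only when the object matching degree exceeds the first threshold and the position matching degree exceeds the second threshold.

```python
# Illustrative sketch, not the patented matching algorithm.
def first_degree_satisfied(object_match, position_match,
                           first_threshold=0.8, second_threshold=0.6):
    # object_match: similarity between an object in the unlocking picture
    #   and the object at the corresponding position in the locking picture.
    # position_match: how closely the objects' positions agree.
    return object_match > first_threshold and position_match > second_threshold


ok = first_degree_satisfied(object_match=0.92, position_match=0.75)
rejected = first_degree_satisfied(object_match=0.92, position_match=0.5)
```

How the two similarity scores are computed (e.g. by image feature comparison) is left open by the disclosure.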
In an alternative embodiment of the disclosure, the apparatus further comprises a locked picture setting module for:
Before receiving an unlocking picture input by a user and aiming at a target object, if a locking picture setting operation of the user aiming at the target object is received, displaying a locking picture input interface;
And receiving the locked picture input by the user through the locked picture input interface.
In an optional embodiment of the disclosure, the locked picture setting module, when receiving the locked picture input by the user through the locked picture input interface, includes any one of the following:
receiving a second drawing operation of a user through a second graphic drawing area in the locked picture input interface, and generating a locked picture based on the second drawing operation;
receiving a locking picture selection triggering operation of a user through a picture selection triggering area of a locking picture input interface; displaying a locking picture selection interface based on locking picture selection triggering operation; when a locking picture selection operation of a user is received through a locking picture selection interface, taking a picture corresponding to the locking picture selection operation as a locking picture;
and receiving a locked picture shooting triggering operation of the user through the image shooting triggering area of the locked picture input interface, calling a shooting device based on the locked picture shooting triggering operation, and taking the picture shot by the shooting device as the locked picture.
In an alternative embodiment of the present disclosure, the apparatus is included in a terminal device, and the unlocking gesture and the locking gesture are characterized by a rotation angle of the terminal device relative to a set direction.
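Since the unlocking and locking gestures are characterized by the terminal's rotation angle relative to a set direction, the second matching check could be sketched as a simple angle comparison (Python; the 10-degree tolerance is an illustrative assumption):

```python
# Hedged sketch: compare two rotation angles on a circle, handling the
# wrap-around at 360 degrees; the tolerance value is illustrative.
def angles_match(unlock_angle_deg, lock_angle_deg, tolerance_deg=10.0):
    diff = abs(unlock_angle_deg - lock_angle_deg) % 360.0
    diff = min(diff, 360.0 - diff)  # shortest distance around the circle
    return diff <= tolerance_deg


near = angles_match(355.0, 3.0)   # wraps around 0 degrees: within tolerance
far = angles_match(90.0, 180.0)   # 90 degrees apart: outside tolerance
```

On a real device the angle would come from the terminal's orientation sensors; the disclosure does not fix how it is measured.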
The target object unlocking apparatus of the embodiments of the present disclosure can perform the target object unlocking method provided by the embodiments of the present disclosure, and its implementation principles are similar. The actions performed by each module of the apparatus correspond to the steps of the target object unlocking method in the embodiments of the present disclosure; for a detailed functional description of each module, reference may be made to the description of the corresponding method shown above, which is not repeated here.
Based on the same principles as the methods shown in the embodiments of the present disclosure, there is also provided in the embodiments of the present disclosure an electronic device that may include, but is not limited to: a processor and a memory; a memory for storing computer operating instructions; and the processor is used for executing the method shown in the embodiment by calling the computer operation instruction.
Based on the same principle as the method shown in the embodiments of the present disclosure, there is also provided a computer readable storage medium storing at least one instruction, at least one program, a code set, or an instruction set, where the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the method shown in the above embodiments, which is not repeated herein.
Referring now to fig. 4, a schematic diagram of an electronic device 600 suitable for use in implementing embodiments of the present disclosure is shown. The terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 4 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
An electronic device includes a memory and a processor. The processor may be referred to hereinafter as the processing device 601, and the memory may include at least one of a read-only memory (ROM) 602, a random access memory (RAM) 603, and a storage device 608, as described in detail below:
As shown in fig. 4, the electronic device 600 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
In general, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, magnetic tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 4 shows an electronic device 600 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via communication means 609, or from storage means 608, or from ROM 602. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 601.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules or units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. Where the name of a module or unit does not in some cases constitute a limitation of the unit itself.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, an example A1 provides a method of target object unlocking, comprising:
receiving an unlocking picture aiming at a target object and/or an unlocking gesture aiming at the target object, which are input by a user;
Acquiring a locking picture and/or a locking gesture of a target object preset by a user;
determining a first matching degree of the unlocking picture and the locking picture, and/or a second matching degree of the unlocking gesture and the locking gesture;
And when the first matching degree and/or the second matching degree meet the preset conditions, determining that the target object is unlocked.
A2, according to the method of A1, receiving the unlocking picture for the target object input by the user comprises:
after receiving unlocking triggering operation of a user for a target object, displaying an unlocking picture input interface;
And receiving an unlocking picture input by a user through an unlocking picture input interface.
A3, according to the method of A2, receiving, through the unlocking picture input interface, the unlocking picture for the target object input by the user comprises any one of the following:
Receiving a first drawing operation of a user through a first graph drawing area in an unlocking picture input interface, and generating an unlocking picture based on the first drawing operation;
Receiving unlocking picture selection triggering operation of a user through a picture selection triggering area of an unlocking picture input interface; displaying an unlocking picture selection interface based on unlocking picture selection triggering operation; when receiving unlocking picture selection operation of a user through an unlocking picture selection interface, taking a picture corresponding to the unlocking picture selection operation as an unlocking picture;
And receiving unlocking picture shooting triggering operation of a user through an image shooting triggering area of the unlocking picture input interface, calling a shooting device based on the unlocking picture shooting triggering operation, and taking a picture shot by the shooting device as an unlocking picture.
A4, according to the method of A1, the first matching degree comprises the object matching degree between the object in the unlocking picture and the object at the corresponding position in the locking picture, and the preset condition comprises that the object matching degree is larger than a first threshold value.
A5, according to the method of A4, the first matching degree further comprises a position matching degree between the position of the object in the unlocking picture and the position of the corresponding object in the locking picture, and the preset condition further comprises that the position matching degree is greater than a second threshold.
A6, the method according to any one of A1-A5, before receiving the unlocking picture input by the user for the target object, further comprises:
displaying a locking picture input interface when a locking picture setting operation of a user aiming at a target object is received;
And receiving the locked picture input by the user through the locked picture input interface.
A7, according to the method of A6, receiving the locked picture input by the user through the locked picture input interface comprises any one of the following:
receiving a second drawing operation of a user through a second graphic drawing area in the locked picture input interface, and generating a locked picture based on the second drawing operation;
receiving a locking picture selection triggering operation of a user through a picture selection triggering area of a locking picture input interface; displaying a locking picture selection interface based on locking picture selection triggering operation; when a locking picture selection operation of a user is received through a locking picture selection interface, taking a picture corresponding to the locking picture selection operation as a locking picture;
and receiving a locked picture shooting triggering operation of the user through the image shooting triggering area of the locked picture input interface, calling a shooting device based on the locked picture shooting triggering operation, and taking the picture shot by the shooting device as the locked picture.
A8, the method according to A1 is executed by a terminal device, and the unlocking gesture and the locking gesture are characterized by a rotation angle of the terminal device relative to a set direction.
According to one or more embodiments of the present disclosure, an apparatus for unlocking a target object is provided [ example B1 ], including:
The unlocking information receiving device is used for receiving an unlocking picture aiming at the target object and/or an unlocking gesture aiming at the target object, which are input by a user;
the locking information acquisition device is used for acquiring a locking picture and/or a locking gesture of a target object preset by a user;
The matching degree determining device is used for determining a first matching degree between the unlocking picture and the locking picture and/or a second matching degree between the unlocking gesture and the locking gesture;
And the unlocking result determining device is used for determining that the target object is unlocked when the first matching degree and/or the second matching degree meet the preset conditions.
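The decision logic of example B1 can be sketched as a short routine. This is a minimal illustration only: the function name, the strict-inequality form, and the threshold values are assumptions not fixed by the disclosure.

```python
# Sketch of the B1 unlock decision: the target object is unlocked when
# the first matching degree (pictures) and the second matching degree
# (gestures) both satisfy the preset conditions. Thresholds are
# illustrative assumptions, not values from the disclosure.

def is_unlocked(first_matching_degree: float,
                second_matching_degree: float,
                first_threshold: float = 0.8,
                second_threshold: float = 0.8) -> bool:
    """Return True when both matching degrees exceed their thresholds."""
    return (first_matching_degree > first_threshold and
            second_matching_degree > second_threshold)
```

In a variant using "and/or" (as in B1), either degree alone exceeding its threshold could suffice; the conjunction above matches the stricter condition of claim 1.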
B2, the device according to B1, wherein, when receiving the unlocking picture input by the user for the target object, the unlocking information receiving device is specifically used for:
after receiving unlocking triggering operation of a user for a target object, displaying an unlocking picture input interface;
And receiving an unlocking picture input by a user through an unlocking picture input interface.
B3, the device according to B2, wherein receiving the unlocking picture input by the user for the target object comprises any one of the following:
Receiving a first drawing operation of a user through a first graph drawing area in an unlocking picture input interface, and generating an unlocking picture based on the first drawing operation;
Receiving unlocking picture selection triggering operation of a user through a picture selection triggering area of an unlocking picture input interface; displaying an unlocking picture selection interface based on unlocking picture selection triggering operation; when receiving unlocking picture selection operation of a user through an unlocking picture selection interface, taking a picture corresponding to the unlocking picture selection operation as an unlocking picture;
And receiving unlocking picture shooting triggering operation of a user through an image shooting triggering area of the unlocking picture input interface, calling a shooting device based on the unlocking picture shooting triggering operation, and taking a picture shot by the shooting device as an unlocking picture.
B4, the device according to B1, wherein the matching degree comprises an object matching degree between an object in the unlocking picture and an object at a corresponding position in the locking picture, and the preset condition comprises that the object matching degree is greater than a first threshold.
B5, the device according to B4, wherein the matching degree further comprises a position matching degree between the position of the object in the unlocking picture and the position of the corresponding object in the locking picture, and the preset condition further comprises that the position matching degree is greater than a second threshold.
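The disclosure does not specify how the position matching degree of B4/B5 is computed. One plausible sketch is an overlap ratio (intersection over union) between the object regions of the unlocking and locking pictures; the box format and function name below are assumptions for illustration.

```python
# Hypothetical position matching degree as IoU of two object regions,
# each given as (x1, y1, x2, y2). Returns a value in [0, 1]; the
# preset condition of B5 would then compare it to the second threshold.

def position_matching_degree(unlock_box, lock_box) -> float:
    ax1, ay1, ax2, ay2 = unlock_box
    bx1, by1, bx2, by2 = lock_box
    # Width and height of the intersection rectangle (0 if disjoint).
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = ((ax2 - ax1) * (ay2 - ay1) +
             (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0
```

Identical regions give 1.0 and disjoint regions give 0.0, so a second threshold between those extremes expresses "roughly the same place in the picture".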
B6, the apparatus according to any one of B1-B5, the apparatus further comprising a locked picture setting module for:
Before receiving an unlocking picture input by a user and aiming at a target object, if a locking picture setting operation of the user aiming at the target object is received, displaying a locking picture input interface;
And receiving the locked picture input by the user through the locked picture input interface.
B7, the device according to B6, wherein receiving, by the locked picture setting module, the locked picture input by the user through the locked picture input interface comprises any one of the following:
receiving a second drawing operation of a user through a second graphic drawing area in the locked picture input interface, and generating a locked picture based on the second drawing operation;
receiving a locking picture selection triggering operation of a user through a picture selection triggering area of a locking picture input interface; displaying a locking picture selection interface based on locking picture selection triggering operation; when a locking picture selection operation of a user is received through a locking picture selection interface, taking a picture corresponding to the locking picture selection operation as a locking picture;
and receiving a locking picture shooting triggering operation of a user through an image shooting triggering area of the locking picture input interface, calling a shooting device based on the locking picture shooting triggering operation, and taking the picture shot by the shooting device as the locking picture.
B8, the device according to B1, wherein the device is included in a terminal device, and the unlocking gesture and the locking gesture are characterized by a rotation angle of the terminal device relative to a set direction.
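A8/B8 characterize each gesture by a rotation angle of the terminal device relative to a set direction, and claim 1 compares the angle sequences in input order. A minimal sketch of that comparison follows; the angular tolerance is an assumption, since the disclosure does not state how closely the angles must agree.

```python
# Sketch of the second-matching-degree check: unlock angles must match
# the configured lock angles pairwise, in input order. The 10-degree
# tolerance is an illustrative assumption.

def gestures_match(unlock_angles, lock_angles,
                   tolerance_deg: float = 10.0) -> bool:
    """True when both sequences have the same length and each unlock
    angle is within tolerance of the lock angle at the same position."""
    if len(unlock_angles) != len(lock_angles):
        return False
    return all(abs(u - l) <= tolerance_deg
               for u, l in zip(unlock_angles, lock_angles))
```

Because the comparison is position-wise, entering the right angles in the wrong order fails, which reflects the sequential relation required by claim 1.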
According to one or more embodiments of the present disclosure, there is provided an electronic device [ example C1 ], characterized by comprising:
A processor and a memory;
A memory for storing computer operating instructions;
a processor for executing the method of any one of A1 to A8 by invoking computer operation instructions.
According to one or more embodiments of the present disclosure, a computer readable medium is provided [ example D1 ], characterized in that the readable medium stores at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by a processor to implement the method of any one of A1 to A8.
The foregoing description is merely a description of the preferred embodiments of the present disclosure and of the principles of the technology employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to the specific combinations of the features described above, and also covers other embodiments formed by any combination of those features or their equivalents without departing from the spirit of the disclosure, for example, embodiments formed by replacing the features described above with (but not limited to) technical features having similar functions disclosed in the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (9)

1. A method for unlocking a target object, which is applied to a terminal device, wherein the terminal device comprises a gyroscope, and the method comprises the following steps:
Receiving an unlocking picture for a target object and an unlocking gesture for the target object, both input by a user, wherein, when an opening operation or an application start by the user for the target object is received, the terminal device is rotated by a certain angle from a set starting position and held for a set duration; the gyroscope in the terminal device then calculates the rotation angle between the current terminal device and the set direction and takes the calculated angle as the input unlocking gesture, and the gyroscope sequentially determines the angle corresponding to each rotation according to the input order, thereby obtaining a plurality of unlocking gestures;
Acquiring a locking picture and a locking gesture of the target object preset by a user;
determining a first matching degree between the unlocking picture and the locking picture and a second matching degree between the unlocking gesture and the locking gesture, wherein the second matching degree is obtained by determining, according to the input order, whether the angles in the plurality of unlocking gestures match the angles of a plurality of locking gestures configured in a sequential relationship;
When the first matching degree and the second matching degree meet the preset conditions, determining that the target object is unlocked;
the first matching degree comprises an object matching degree between an object in the unlocking picture and an object in a corresponding position in the locking picture, and the preset condition comprises that the object matching degree is larger than a first threshold value;
wherein determining the object matching degree comprises:
when determining the object matching degree between the unlocking picture and the locking picture, determining whether the text contents in the object areas are similar and whether the ordering of the different texts is the same.
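The text check in claim 1 requires that the text contents of corresponding object areas be similar and that the texts appear in the same order. A sketch of one way to realize this follows; comparing position-wise with a `difflib` similarity ratio is an illustrative simplification, and the 0.8 threshold is an assumption not taken from the disclosure.

```python
from difflib import SequenceMatcher

# Sketch of the claim-1 object matching for text: the texts recognized
# in the unlocking picture must pairwise resemble the texts recognized
# in the locking picture, compared in order, so both content similarity
# and ordering are checked at once. Threshold is an assumption.

def text_objects_match(unlock_texts, lock_texts,
                       similarity: float = 0.8) -> bool:
    if len(unlock_texts) != len(lock_texts):
        return False
    return all(SequenceMatcher(None, u, l).ratio() >= similarity
               for u, l in zip(unlock_texts, lock_texts))
```

Swapping two otherwise-identical texts changes the pairing and fails the check, which captures the "same ordering" requirement.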
2. The method of claim 1, wherein receiving the unlock picture for the target object entered by the user comprises:
after receiving unlocking triggering operation of a user for a target object, displaying an unlocking picture input interface;
and receiving an unlocking picture input by a user through the unlocking picture input interface.
3. The method according to claim 2, wherein the receiving, through the unlock picture input interface, an unlock picture for a target object input by a user, includes any one of:
receiving a first drawing operation of the user through a first graph drawing area in the unlocking picture input interface, and generating the unlocking picture based on the first drawing operation;
Receiving unlocking picture selection triggering operation of a user through a picture selection triggering area of the unlocking picture input interface; displaying an unlocking picture selection interface based on the unlocking picture selection triggering operation; when receiving unlocking picture selection operation of a user through the unlocking picture selection interface, taking a picture corresponding to the unlocking picture selection operation as the unlocking picture;
And receiving unlocking picture shooting triggering operation of a user through an image shooting triggering area of the unlocking picture input interface, calling a shooting device based on the unlocking picture shooting triggering operation, and taking a picture shot by the shooting device as the unlocking picture.
4. The method of claim 1, wherein the first matching degree further comprises a position matching degree between the position of the object in the unlocking picture and the position of the corresponding object in the locking picture, and the preset condition further comprises that the position matching degree is greater than a second threshold.
5. The method according to any one of claims 1-4, wherein before receiving the unlock picture for the target object entered by the user, further comprising:
Displaying a locking picture input interface when a locking picture setting operation of the user for the target object is received;
and receiving the locked picture input by the user through the locked picture input interface.
6. The method of claim 5, wherein the receiving the user-entered locked picture via the locked picture input interface comprises any one of:
Receiving a second drawing operation of the user through a second graphic drawing area in the locked picture input interface, and generating the locked picture based on the second drawing operation;
receiving a locking picture selection triggering operation of a user through a picture selection triggering area of the locking picture input interface; displaying a locking picture selection interface based on the locking picture selection triggering operation; when a locking picture selection operation of a user is received through the locking picture selection interface, taking a picture corresponding to the locking picture selection operation as the locking picture;
And receiving a locking picture shooting trigger operation of a user through an image shooting trigger area of the locking picture input interface, calling a shooting device based on the locking picture shooting trigger operation, and taking the picture shot by the shooting device as the locking picture.
7. An apparatus for unlocking a target object, the apparatus being applied to a terminal device, the terminal device including a gyroscope therein, the apparatus comprising:
The unlocking information receiving device is used for receiving an unlocking picture for a target object and an unlocking gesture for the target object input by a user, wherein, when the unlocking picture for the target object and the unlocking gesture for the target object are received, the terminal device is rotated by a certain angle from a set starting position and held for a set duration; the gyroscope in the terminal device then calculates the rotation angle between the current terminal device and the set direction, takes the calculated angle as the input unlocking gesture, and sequentially determines the angle corresponding to each rotation according to the input order to obtain a plurality of unlocking gestures;
The locking information acquisition device is used for acquiring a locking picture and a locking gesture of the target object preset by a user;
The matching degree determining device is used for determining a first matching degree between the unlocking picture and the locking picture and a second matching degree between the unlocking gesture and the locking gesture, wherein the second matching degree is obtained by determining, according to the input order, whether the angles in the plurality of unlocking gestures match the angles of a plurality of locking gestures configured in a sequential relationship; the first matching degree comprises an object matching degree between an object in the unlocking picture and an object at a corresponding position in the locking picture; determining the object matching degree comprises: recognizing that the objects included in the unlocking picture and the locking picture are a plurality of characters, taking an area formed by the plurality of characters as an object area, and, when determining the object matching degree between the unlocking picture and the locking picture, determining whether the text contents in the object areas are similar and whether the ordering of the different texts is the same;
And the unlocking result determining device is used for determining that the target object is unlocked when the first matching degree and the second matching degree meet the preset conditions, wherein the preset conditions comprise that the object matching degree is larger than a first threshold value.
8. An electronic device, comprising:
A processor and a memory;
the memory is used for storing computer operation instructions;
the processor is configured to perform the method of any one of claims 1 to 6 by invoking the computer operating instructions.
9. A computer readable medium having stored thereon at least one instruction, at least one program, code set or instruction set, the at least one instruction, the at least one program, code set or instruction set being loaded and executed by a processor to implement the method of any of claims 1 to 6.
CN202010193363.7A 2020-03-18 2020-03-18 Method and device for unlocking target object, electronic equipment and readable medium Active CN111400693B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010193363.7A CN111400693B (en) 2020-03-18 2020-03-18 Method and device for unlocking target object, electronic equipment and readable medium

Publications (2)

Publication Number Publication Date
CN111400693A CN111400693A (en) 2020-07-10
CN111400693B true CN111400693B (en) 2024-06-18

Family

ID=71436597

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010193363.7A Active CN111400693B (en) 2020-03-18 2020-03-18 Method and device for unlocking target object, electronic equipment and readable medium

Country Status (1)

Country Link
CN (1) CN111400693B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102368200A (en) * 2011-10-28 2012-03-07 青岛海信移动通信技术股份有限公司 Touch screen unlocking method and electronic product with touch screen
CN102736853A (en) * 2012-05-17 2012-10-17 北京三星通信技术研究有限公司 Screen unlocking method, screen locking method and terminal
CN102929515A (en) * 2012-10-29 2013-02-13 广东欧珀移动通信有限公司 Mobile terminal unlocking method and mobile terminal
CN106096377A (en) * 2016-06-21 2016-11-09 北京奇虎科技有限公司 Application unlocking method, device and the mobile terminal of a kind of mobile terminal
CN106469002A (en) * 2015-08-17 2017-03-01 阿里巴巴集团控股有限公司 A kind of method and apparatus for unblock
CN107368730A (en) * 2017-07-31 2017-11-21 广东欧珀移动通信有限公司 Unlock verification method and device
CN109409071A (en) * 2018-11-13 2019-03-01 湖北文理学院 Unlocking method, device and the electronic equipment of electronic equipment

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101497762B1 (en) * 2012-02-01 2015-03-05 서울시립대학교 산학협력단 Unlocking method, and terminal and recording medium for the same method
CN103294366B (en) * 2012-02-27 2016-04-27 联想(北京)有限公司 A kind of screen unlock method and electronic equipment
CN102880489B (en) * 2012-09-13 2016-10-19 百度在线网络技术(北京)有限公司 The application program launching method of mobile terminal, device and mobile terminal
CN103167143A (en) * 2012-09-20 2013-06-19 深圳市金立通信设备有限公司 Gravity ball unlocking system and method of mobile phone
CN103927106A (en) * 2013-01-14 2014-07-16 富泰华工业(深圳)有限公司 Application program starting system and method
CN103106034A (en) * 2013-02-05 2013-05-15 中标软件有限公司 Unlocking method and unlocking system for electronic device and electronic device screen or electronic device application
CN103116465A (en) * 2013-02-06 2013-05-22 中标软件有限公司 Screen of electronic equipment or applied unlocking method and system
CN104536642B (en) * 2014-12-09 2018-07-27 小米科技有限责任公司 unlocking method and device
CN104573444B (en) * 2015-01-20 2018-01-23 广东欧珀移动通信有限公司 The unlocking method and device of a kind of terminal
CN105260630A (en) * 2015-09-23 2016-01-20 上海与德通讯技术有限公司 Screen unlocking method and unlocking module
CN105224840A (en) * 2015-10-14 2016-01-06 上海斐讯数据通信技术有限公司 A kind of unlock method of mobile terminal, system for unlocking and mobile terminal
CN106909812A (en) * 2015-12-23 2017-06-30 北京奇虎科技有限公司 Terminal unlocking processing method and terminal
CN106488034A (en) * 2016-11-24 2017-03-08 努比亚技术有限公司 A kind of method realizing unlocking and mobile terminal
CN106933349A (en) * 2017-02-06 2017-07-07 歌尔科技有限公司 Unlocking method, device and virtual reality device for virtual reality device
CN107015732B (en) * 2017-04-28 2020-05-05 维沃移动通信有限公司 Interface display method and mobile terminal
CN107346387B (en) * 2017-06-23 2023-10-17 深圳传音通讯有限公司 Unlocking method and device
CN110659475B (en) * 2019-09-17 2021-06-01 珠海格力电器股份有限公司 Unlocking method and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230428

Address after: Room 802, Information Building, 13 Linyin North Street, Pinggu District, Beijing, 101299

Applicant after: Beijing youzhuju Network Technology Co.,Ltd.

Address before: No. 715, 7th floor, building 3, 52 Zhongguancun South Street, Haidian District, Beijing 100081

Applicant before: Beijing infinite light field technology Co.,Ltd.

GR01 Patent grant