CN111400693A - Target object unlocking method and device, electronic equipment and readable medium - Google Patents


Info

Publication number
CN111400693A
CN111400693A (application number CN202010193363.7A)
Authority
CN
China
Prior art keywords
picture
unlocking
user
target object
locking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010193363.7A
Other languages
Chinese (zh)
Inventor
谢飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Youzhuju Network Technology Co Ltd
Original Assignee
Beijing Infinite Light Field Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Infinite Light Field Technology Co Ltd filed Critical Beijing Infinite Light Field Technology Co Ltd
Priority to CN202010193363.7A
Publication of CN111400693A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/36User authentication by graphic or iconic representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the disclosure provide a method and a device for unlocking a target object, an electronic device and a readable medium, the method comprising the following steps: receiving an unlocking picture for a target object and/or an unlocking gesture for the target object input by a user; acquiring a locking picture and/or a locking gesture preset by the user for the target object; determining a first matching degree between the unlocking picture and the locking picture and/or a second matching degree between the unlocking gesture and the locking gesture; and when the first matching degree and/or the second matching degree satisfies a preset condition, determining that the target object is unlocked. Because the target object is locked/unlocked by means of pictures and gestures, a new locking/unlocking mode is added, so that locking/unlocking is more diversified; furthermore, because the unlocking result is determined based on multiple matching degrees, locking/unlocking security can be improved, which further improves the user experience.

Description

Target object unlocking method and device, electronic equipment and readable medium
Technical Field
The disclosure relates to the technical field of computers, in particular to a method and a device for unlocking a target object, electronic equipment and a readable medium.
Background
With the popularization of intelligent terminals, people store more and more important information in them, and information that needs to be protected is generally given a locking/unlocking function so as to keep it secure. In the prior art, locking/unlocking is usually implemented with a password consisting of several digits or a connection pattern drawn on a nine-square (3×3) grid. These implementations are not personalized, and because the number of possible digit passwords or nine-square patterns is limited, they offer poor security and are easy to crack.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In a first aspect, an embodiment of the present disclosure provides a method for unlocking a target object, where the method includes:
receiving an unlocking picture for a target object and/or an unlocking gesture for the target object, which are input by a user;
acquiring a locking picture and/or a locking gesture of a target object preset by a user;
determining a first matching degree between the unlocking picture and the locking picture and/or a second matching degree between the unlocking gesture and the locking gesture;
and when the first matching degree and/or the second matching degree satisfies a preset condition, determining that the target object is unlocked.
In a second aspect, an embodiment of the present disclosure provides an apparatus for unlocking a target object, including:
the unlocking information receiving device is used for receiving an unlocking picture aiming at the target object and/or an unlocking gesture aiming at the target object, which are input by a user;
the locking information acquisition device is used for acquiring a locking picture and/or a locking gesture of a target object preset by a user;
the matching degree determining device is used for determining a first matching degree between the unlocking picture and the locking picture and/or a second matching degree between the unlocking gesture and the locking gesture;
and the unlocking result determining device is used for determining that the target object is unlocked when the first matching degree and/or the second matching degree satisfies the preset condition.
In a third aspect, the present disclosure provides an electronic device comprising a processor and a memory;
a memory for storing computer operating instructions;
a processor for performing the method as shown in the first aspect of the embodiments of the present disclosure by invoking computer operational instructions.
In a fourth aspect, the present disclosure provides a computer readable medium having stored thereon at least one instruction, at least one program, set of codes or set of instructions, which is loaded and executed by a processor to implement a method as shown in the first aspect of embodiments of the present disclosure.
The technical scheme provided by the embodiment of the disclosure has the following beneficial effects:
in the embodiments of the disclosure, a locking picture and/or a locking gesture may be preset for a target object. When the target object is to be unlocked, the matching degree between the unlocking picture and the locking picture and/or the matching degree between the unlocking gesture and the locking gesture is determined, and the target object may be unlocked when the matching degree satisfies the set condition. That is to say, in the embodiments of the present disclosure, the target object is locked/unlocked by means of pictures and gestures, which adds a new locking/unlocking mode and makes locking/unlocking more diversified; furthermore, because the unlocking result is determined based on the matching degree between the unlocking picture and the locking picture and the matching degree between the unlocking gesture and the locking gesture, locking/unlocking security can be improved compared with the prior art that only uses digits or a nine-square grid, which further improves the user experience.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a schematic flowchart of a method for unlocking a target object according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a terminal device and a rotation angle in an embodiment of the disclosure;
FIG. 3 is a schematic structural diagram of an apparatus for unlocking a target object according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an electronic device in an embodiment of the disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used to distinguish different devices, modules or units, and are not intended to limit the order of, or interdependence between, the functions performed by these devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
An embodiment of the present disclosure provides a method for unlocking a target object, which may include, as shown in fig. 1:
step S110, receiving an unlocking picture aiming at a target object and/or an unlocking gesture aiming at the target object, which are input by a user;
in practical applications, the specific type of the target object is not limited in the embodiment of the present disclosure, for example, the target object may be an application installed in the terminal device, a user interface of the application, the terminal device itself, a file or a picture stored in the device, or the like.
When the user wants to open or use the target object, and the target object is a locked object, the user is prompted to input the corresponding unlocking information.
And step S120, acquiring a locking picture and/or a locking gesture of the target object preset by the user.
In practical application, when the unlocking picture and/or the unlocking gesture input by the user is received, the locking picture and/or the locking gesture preset for the target object can be obtained, and then the unlocking result of the target object can be determined according to the unlocking picture input by the user and the set locking picture, and/or according to the unlocking gesture input by the user and the set locking gesture. Specifically, when an operation by which the user opens the target object or starts the application is received, the locking picture and/or the locking gesture of the target object, that is, the picture or gesture preset for locking it, may be obtained based on the object identifier of the target object to be unlocked.
The unlocking result of the target object has two cases: unlocking passes or unlocking fails. It can be understood that when the unlocking result is that unlocking passes, the user may be allowed to access the target object, and when the unlocking result is that unlocking fails, the user may not be allowed to access the target object.
Step S130, determining a first matching degree between the unlocking picture and the locking picture, and/or a second matching degree between the unlocking gesture and the locking gesture.
And step S140, when the first matching degree and/or the second matching degree satisfies the preset condition, determining that the target object is unlocked.
In practical application, when determining the unlocking result of the target object, a first matching degree between the unlocking picture input by the user and the locking picture and/or a second matching degree between the unlocking gesture and the locking gesture may be determined, and it is then judged whether the obtained first matching degree and/or second matching degree satisfies the preset condition. Correspondingly, when the obtained matching degree satisfies the preset condition, the determined unlocking result is that unlocking passes, that is, the user can access the target object; when the obtained matching degree does not satisfy the preset condition, the determined unlocking result is that unlocking fails, that is, the user cannot access the target object.
The specific way of determining the first matching degree between the unlocking picture and the locking picture is not limited by the embodiments of the present disclosure. For example, the similarity between the unlocking picture and the locking picture may be calculated directly; the higher the similarity, the higher the first matching degree between the unlocking picture and the locking picture, and the lower the similarity, the lower the first matching degree. Further, it may be determined whether the first matching degree satisfies the preset condition.
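As a minimal sketch of this direct-similarity approach (not an algorithm specified by the disclosure), the picture similarity could be estimated with a color-histogram comparison. The use of PIL, the histogram-intersection measure, the function names and the 0.9 threshold below are all illustrative assumptions.

```python
from PIL import Image

def picture_similarity(unlock_path: str, lock_path: str, size=(64, 64)) -> float:
    """Return a similarity score in [0, 1] via color-histogram intersection."""
    a = Image.open(unlock_path).convert("RGB").resize(size).histogram()
    b = Image.open(lock_path).convert("RGB").resize(size).histogram()
    intersection = sum(min(x, y) for x, y in zip(a, b))
    return intersection / sum(a)  # both histograms cover the same pixel count

def first_matching_degree_satisfied(unlock_path: str, lock_path: str,
                                    threshold: float = 0.9) -> bool:
    # Higher similarity means a higher first matching degree, as described above.
    return picture_similarity(unlock_path, lock_path) >= threshold
```

Any other image-similarity measure could be substituted; the disclosure only requires that a higher similarity correspond to a higher first matching degree.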
In an optional embodiment of the present disclosure, the first matching degree may include an object matching degree between an object in the unlocked picture and an object in a corresponding position in the locked picture, and the preset condition includes that the object matching degree is greater than a first threshold.
The objects in the locked picture or the unlocked picture may be characters, animals, plants, characters, and the like included in the locked picture or the unlocked picture, which is not limited in the embodiment of the disclosure.
In practical applications, if objects are included in the unlocking picture and/or the locking picture, then when determining the first matching degree, in addition to directly calculating the similarity between the unlocking picture and the locking picture, the object matching degree between an object in the unlocking picture and the object at the corresponding position in the locking picture may be determined, and the preset condition may be set so that the object matching degree must be greater than the first threshold. That is, only when the determined object matching degree is greater than the first threshold may the first matching degree be determined to satisfy the preset condition.
It can be understood that "an object in the unlocking picture and the object at the corresponding position in the locking picture" refers to objects located in corresponding areas of the unlocking picture and the locking picture. For example, if a plurality of objects are included in both the unlocking picture and the locking picture, the object matching degree may refer to the matching degree between an object in the left region of the unlocking picture and the object in the left region of the locking picture. It can also be understood that, if the unlocking picture and the locking picture each include only one object, the determined object matching degree may be the matching degree between those two objects; correspondingly, if only one of the pictures (the locking picture or the unlocking picture) includes an object and the other does not, the object matching degree does not need to be determined.
In an optional embodiment of the present disclosure, the first matching degree further includes a position matching degree between the position of an object in the unlocking picture and the position of the object at the corresponding position in the locking picture, and the preset condition further includes that the position matching degree is greater than a second threshold.
In practical applications, after determining the object matching degree between an object in the unlocking picture and the object at the corresponding position in the locking picture, the position matching degree between the position of the object in the unlocking picture and the position of the corresponding object in the locking picture may further be determined, and the preset condition may be set so that the position matching degree must be greater than the second threshold. That is, only when the determined position matching degree is greater than the second threshold can the preset condition be satisfied and the unlocking result be that unlocking passes; otherwise, if the determined position matching degree is not greater than the second threshold, the preset condition is not satisfied and the unlocking result is that unlocking fails. When determining the position matching degree, for example, the matching degree between the area occupied by an object in the left region of the unlocking picture and the area occupied by the corresponding object in the left region of the locking picture may be determined.
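A hedged sketch of how the object matching degree and the position matching degree could be combined is given below. It assumes the objects have already been recognized (e.g. by an image-recognition or OCR step) as (label, bounding box) pairs paired up by corresponding region; the exact-label comparison, the intersection-over-union position measure, and both threshold values are illustrative assumptions, not values fixed by the disclosure.

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]   # (left, top, right, bottom)
DetectedObject = Tuple[str, Box]          # (label, box), e.g. ("cat", (...))

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two boxes, used here as the position matching degree."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def objects_match(unlock_objs: List[DetectedObject],
                  lock_objs: List[DetectedObject],
                  first_threshold: float = 0.8,
                  second_threshold: float = 0.5) -> bool:
    """Every corresponding pair must pass both the object and the position check."""
    if len(unlock_objs) != len(lock_objs):
        return False
    for (u_label, u_box), (l_label, l_box) in zip(unlock_objs, lock_objs):
        object_degree = 1.0 if u_label == l_label else 0.0   # object matching degree
        position_degree = iou(u_box, l_box)                  # position matching degree
        if object_degree <= first_threshold or position_degree <= second_threshold:
            return False
    return True
```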
For the specific way of recognizing the objects included in the unlocking picture and the locking picture, reference may be made to existing image recognition technologies; for example, OCR (Optical Character Recognition) technology may be used to obtain text objects in a picture.
In practical application, if the objects recognized in the unlocking picture and the locking picture are multiple characters, the area formed by those characters can be treated as a single object area, and when determining the object matching degree between the unlocking picture and the locking picture, it can be determined whether the character contents in the object areas are similar and whether the characters appear in the same order.
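A minimal sketch of that character comparison is shown below. It assumes the two text objects have already been extracted (for example with OCR) as strings; treating the check as an exact, whitespace-insensitive comparison of content and order is an illustrative choice rather than a requirement of the disclosure.

```python
def text_objects_match(unlock_text: str, lock_text: str) -> bool:
    """Compare recognized character content and order, ignoring whitespace differences."""
    normalize = lambda s: "".join(s.split())
    return normalize(unlock_text) == normalize(lock_text)
```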
In the disclosed embodiment, the method is performed by a terminal device, and the unlocking gesture and the locking gesture are characterized by the rotation angle of the terminal device relative to a set direction.
In practical applications, the method provided by the embodiment of the present disclosure may be performed by a terminal device that includes a gyroscope, and the gesture for locking or unlocking the target object may be represented by the rotation angle between the terminal device and the set direction.
The user may pre-configure a locking gesture for the target object, where the gesture may be a rotation angle of the terminal device relative to the set direction; for example, the terminal may be rotated 45 degrees to the right from the vertical direction. Correspondingly, when an operation of opening the target object or starting the application is received, the terminal device can be rotated by a certain angle from the set initial position and held for a set duration, after which the gyroscope in the terminal device calculates the rotation angle between the current orientation of the terminal device and the set direction, and the calculated angle is used as the input unlocking gesture. Further, it may be determined whether the calculated angle is consistent with the configured angle characterizing the locking gesture (that is, the second matching degree between the unlocking gesture and the locking gesture is determined); if they are consistent, it may be determined that the second matching degree satisfies the preset condition, otherwise it does not.
As shown in fig. 2, in this example, assuming that the three-dimensional space is decomposed into 8 calibration spaces based on three-axis angles (x, y, z), when the terminal device is at the position shown in fig. 2 (i.e., the start position), the angle between the current terminal device and the set direction is 0 degree, and when the terminal device is rotated based on any one of the x-axis, the y-axis, or the z-axis, the gyroscope may calculate the rotation angle of the terminal device with respect to the set direction at this time, and may represent the rotation angle by using the three-axis angles (x, y, z).
Further, in practical application, a plurality of locking gestures may be set for the target object, with a sequential relationship among them. Correspondingly, when a user wants to unlock the target object, the user needs to input a plurality of unlocking gestures in sequence (that is, rotate the terminal device by a certain angle from the initial position and hold it for the preset duration, one rotation at a time). The gyroscope in the terminal device determines the angle corresponding to each rotation in turn, matches the input unlocking gestures against the locking gestures in order to obtain a second matching degree, and determines whether the second matching degree satisfies the preset condition. The way the second matching degree is determined, and the way it satisfies the condition, may be configured in advance; for example, when a plurality of locking gestures are configured, the matching degree of each locking gesture with the corresponding unlocking gesture may be determined, and the condition may be that every locking gesture and its corresponding unlocking gesture are consistent.
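The following sketch illustrates one way the gyroscope-based comparison could work, both for a single gesture and for an ordered sequence of gestures. The three-axis (x, y, z) representation follows the description above; the per-axis tolerance, the "all pairs must match" condition and all names are illustrative assumptions.

```python
from typing import Sequence, Tuple

Rotation = Tuple[float, float, float]   # rotation in degrees about the x, y and z axes

def gesture_matches(unlock_rotation: Rotation, lock_rotation: Rotation,
                    tolerance_deg: float = 10.0) -> bool:
    """A single unlocking gesture matches a locking gesture when every axis agrees."""
    return all(abs(u - l) <= tolerance_deg
               for u, l in zip(unlock_rotation, lock_rotation))

def gesture_sequence_matches(unlock_rotations: Sequence[Rotation],
                             lock_rotations: Sequence[Rotation]) -> bool:
    """All configured locking gestures must be matched, in the configured order."""
    if len(unlock_rotations) != len(lock_rotations):
        return False
    return all(gesture_matches(u, l)
               for u, l in zip(unlock_rotations, lock_rotations))
```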
In addition, in practical applications, a user may sometimes find it troublesome to input an unlocking gesture. Therefore, in the embodiment of the present disclosure, other characters (letters, numbers, or the like) corresponding to the locking gestures may also be configured in advance, for example, locking gesture 1 corresponds to the number 1 and locking gesture 2 corresponds to the number 2. Further, when inputting the unlocking gestures, the user may choose to input the corresponding characters instead; for example, the user may input the number 1 instead of unlocking gesture 1 and the number 2 instead of unlocking gesture 2 through the keyboard. After the terminal device receives the numbers 1 and 2 input by the user, they are compared with the numbers corresponding to each locking gesture to obtain a comparison result, and the comparison result is used as the second matching degree between the unlocking gesture and the locking gesture.
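A small sketch of this character-substitution shortcut follows. The mapping from locking gestures to characters mirrors the example above (gesture 1 to "1", gesture 2 to "2"); the assumption that every configured gesture has a bound character, and the exact-sequence comparison, are illustrative.

```python
# Locking gesture identifiers bound to substitute characters, e.g. gesture 1 <-> "1".
GESTURE_TO_CHAR = {1: "1", 2: "2"}

def typed_characters_match(typed: str, lock_gesture_ids: Sequence[int]) -> bool:
    """Compare typed characters with those bound to the configured locking gestures,
    in order. Assumes every configured gesture id has an entry in GESTURE_TO_CHAR."""
    expected = "".join(GESTURE_TO_CHAR[g] for g in lock_gesture_ids)
    return typed == expected
```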
It should be noted that, in the embodiment of the present disclosure, the target object may be configured with both a locking picture and a locking gesture, or with only a locking picture or only a locking gesture, which is not limited in the embodiment of the present disclosure. Correspondingly, when only a locking picture or only a locking gesture is set, the user only needs to input an unlocking picture or an unlocking gesture when unlocking the target object, and it is only determined whether the first matching degree or the second matching degree satisfies the preset condition. When both the locking picture and the locking gesture are set, the user needs to input both the unlocking picture and the unlocking gesture when unlocking the target object, and it is determined whether the first matching degree and the second matching degree both satisfy the preset condition; in this case, the order in which the user inputs the unlocking picture and the unlocking gesture may be configured in advance, which is not limited in the embodiment of the disclosure.
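A minimal sketch of this overall decision step is given below. The boolean inputs stand for "the first/second matching degree satisfies its preset condition", with None meaning that kind of locking information was not configured; the function and parameter names are illustrative assumptions.

```python
from typing import Optional

def unlock_passes(picture_condition_met: Optional[bool],
                  gesture_condition_met: Optional[bool]) -> bool:
    """Unlocking passes only when every configured matching degree meets its condition."""
    checks = [c for c in (picture_condition_met, gesture_condition_met) if c is not None]
    return bool(checks) and all(checks)

# Examples: picture-only lock, gesture-only lock, and a lock requiring both.
assert unlock_passes(True, None) is True
assert unlock_passes(None, False) is False
assert unlock_passes(True, False) is False
```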
In the embodiment of the disclosure, a locking picture and/or a locking gesture may be preset for a target object. When the target object is to be unlocked, the matching degree between the unlocking picture and the locking picture and/or the matching degree between the unlocking gesture and the locking gesture is determined, and the target object may be unlocked when the matching degree satisfies the set condition. That is to say, in the embodiment of the present disclosure, pictures and gestures are adopted to lock/unlock the target object, which adds a new locking/unlocking mode and makes locking/unlocking more diversified; furthermore, because the unlocking result is determined based on the matching degree between the unlocking picture and the locking picture and the matching degree between the unlocking gesture and the locking gesture, locking/unlocking security can be improved compared with the prior art that only uses digits or a nine-square grid, which further improves user experience.
In an optional embodiment of the present disclosure, receiving an unlock picture for a target object input by a user includes:
displaying an unlocking picture input interface after receiving an unlocking trigger operation of a user for a target object;
and receiving an unlocking picture input by a user through the unlocking picture input interface.
The unlocking trigger operation refers to the action by which the user starts to unlock the target object, that is, the action that leads to the user inputting an unlocking picture, and the way the unlocking trigger operation is triggered may be configured in advance, which is not limited in the embodiment of the present disclosure. For example, when the target object is an application program or a file installed in the terminal device, the unlocking trigger operation for the target object may be regarded as triggered when the user clicks the target object; when the target object is the terminal device itself, the unlocking trigger operation may be regarded as triggered when a preset touch action is performed on the terminal device, for example, when the user double-clicks a preset area on the screen of the terminal device.
Correspondingly, after receiving an unlocking trigger operation of the user for the target object, an unlocking picture input interface can be displayed, and at the moment, the user can input an unlocking picture based on the displayed unlocking picture input interface.
In practical application, the mode that the user inputs the unlocking picture based on the displayed unlocking picture input interface may be configured in advance, and the embodiment of the present disclosure is not limited, for example, the unlocking picture may be input in any one of the following modes:
• Receiving a first drawing operation of the user through a first graphic drawing area in the unlocking picture input interface, and generating a corresponding unlocking picture based on the first drawing operation.
Specifically, a first graphic drawing area may be set in the unlocking picture input interface, and the user may trigger a first drawing operation by drawing in the first graphic drawing area, for example by sliding to draw lines in that area. Further, when a drawing completion operation is received, a drawn picture is generated according to the first drawing operation and used as the unlocking picture. For example, when the drawing completion operation is received, a screenshot of the first graphic drawing area may be taken and used as the unlocking picture.
In order to make drawing easier, the first graphic drawing area can be provided in the form of a drawing tool and can include a plurality of function buttons, such as a function button for filling colors and a function button for setting line colors; correspondingly, when the user clicks a function button, the function corresponding to that button is enabled.
• Receiving an unlocking picture selection trigger operation of the user through a picture selection trigger area of the unlocking picture input interface, displaying an unlocking picture selection interface based on the unlocking picture selection trigger operation, and, when an unlocking picture selection operation of the user is received through the unlocking picture selection interface, taking the picture corresponding to the unlocking picture selection operation as the unlocking picture.
The picture selection trigger area refers to the area of the unlocking picture input interface that can be used to trigger the unlocking picture selection trigger operation. In practical application, a user may not want to draw an unlocking picture manually but instead directly use a stored picture as the unlocking picture. In this case, the unlocking picture selection trigger operation of the user can be received through the picture selection trigger area in the unlocking picture input interface; further, an unlocking picture selection interface can be displayed based on the unlocking picture selection trigger operation, in which each picture that can be used as the unlocking picture is displayed; classification entries can also be displayed, and when the user selects an entry, the pictures included under that entry are displayed. Correspondingly, when the user selects a picture on the unlocking picture selection interface, the unlocking picture selection operation of the user is considered received, and the picture selected by the user is used as the unlocking picture input by the user.
The mode that the user triggers the unlocking picture selection triggering operation through the picture selection triggering area of the unlocking picture input interface can be configured in advance, and the embodiment of the disclosure is not limited. For example, a virtual button for triggering the unlocked picture selection trigger operation may be set in the picture selection trigger area of the unlocked picture input interface, and when the user clicks the virtual button, the unlocked picture selection trigger operation is considered to be triggered, and then the unlocked picture selection interface is displayed.
• Receiving an unlocking picture shooting trigger operation of the user through an image shooting trigger area of the unlocking picture input interface, calling a shooting device based on the unlocking picture shooting trigger operation, and taking the picture shot by the shooting device as the unlocking picture.
The image shooting trigger area refers to the area of the unlocking picture input interface that can be used to trigger the unlocking picture shooting trigger operation. In practical application, the user can also trigger the unlocking picture shooting trigger operation through the image shooting trigger area in the unlocking picture input interface to call a shooting device, then shoot a picture through the shooting device, and directly use the shot picture as the unlocking picture.
The mode that the user triggers the trigger operation of unlocking picture shooting through the image shooting trigger area of the unlocking picture input interface may be pre-configured, and the embodiment of the disclosure is not limited. For example, a virtual button for triggering an unlocked picture taking trigger operation may be set in an image taking trigger area in the unlocked picture input interface, when a user clicks the virtual button, it is determined that the user has triggered the unlocked picture taking trigger operation, and then the photographing device may be called and operated, at which time the user may take a picture based on the operated photographing device, and when a confirmation operation of the user for a certain taken picture is received, the picture may be taken as an unlocked picture.
In an optional embodiment of the present disclosure, before receiving an unlock picture for a target object input by a user, the method further includes:
displaying a locking picture input interface when receiving a locking picture setting operation of a user for a target object;
and receiving a locking picture input by a user through a locking picture input interface.
In practical applications, if a user performs an unlocking operation on a target object, it is indicated that a locking picture is set for the target object. The optional mode of setting a locking picture for the target object is as follows: when receiving a locking picture setting operation of a user for a target object, displaying a locking picture input interface, and at the moment, the user can input a picture based on the displayed locking picture input interface, and then taking the picture input by the user as a locking picture of the target object.
The locking picture setting operation refers to the action of the user setting a locking picture for the target object, and the way the locking picture setting operation is triggered may be configured in advance, which is not limited in the embodiment of the present disclosure. For example, a virtual button for triggering a locking picture setting trigger operation (that is, the action indicating that the user wants to set a locking picture) may be provided in the application interface; when the user clicks the virtual button, the locking picture setting trigger operation is considered triggered, and a target object selection interface may then be displayed, in which the objects for which a locking picture can be set are shown. When the user selects an object on the target object selection interface, the user is considered to have triggered the locking picture setting operation for that object, and the selected object is the target object. Of course, in practical applications, when the target object is an application program or a file installed in the terminal device, a virtual button for triggering the locking picture setting operation may be displayed when the user clicks the target object, and clicking that virtual button may be regarded as triggering the locking picture setting operation for the target object.
Further, when receiving a locking picture setting operation of a user for a target object, a locking picture input interface may be displayed, and at this time, the user may input a locking picture through the locking picture input interface.
In practical application, the mode that the user inputs the locking picture based on the displayed locking picture input interface may be configured in advance, and the embodiment of the present disclosure is not limited, for example, the locking picture may be input in any one of the following modes:
• Receiving a second drawing operation of the user through a second graphic drawing area in the locked picture input interface, and generating a corresponding locked picture based on the second drawing operation.
Specifically, a second graphic drawing area may be set in the locked picture input interface, and the user may trigger a second drawing operation by drawing in the second graphic drawing area. Further, when the drawing completion operation is received, a drawn picture is generated according to the second drawing operation and used as the locking picture.
In order to make drawing easier, the second graphic drawing area, like the first graphic drawing area, can also be provided in the form of a drawing tool, that is, it may include a plurality of function buttons, such as a function button for filling colors and a function button for setting line colors; correspondingly, when the user clicks a function button, the function corresponding to that button is enabled.
• Receiving a locked picture selection trigger operation of the user through a picture selection trigger area of the locked picture input interface, displaying a locked picture selection interface based on the locked picture selection trigger operation, and, when a locked picture selection operation of the user is received through the locked picture selection interface, taking the picture corresponding to the locked picture selection operation as the locked picture.
In practical application, a user may not want to draw a locking picture manually but instead directly use a stored picture as the locking picture. In this case, the locking picture selection trigger operation of the user can be received through the picture selection trigger area in the locking picture input interface; further, a locking picture selection interface can be displayed based on the locking picture selection trigger operation, in which each picture that can be used as the locking picture is displayed; classification entries containing pictures can also be displayed, and when the user selects an entry, the pictures included under that entry are displayed. Correspondingly, when the user selects a picture on the locking picture selection interface, the locking picture selection operation of the user is considered received, and the picture selected by the user is used as the locking picture input by the user.
The mode that the user triggers the locking picture selection triggering operation through the picture selection triggering area of the locking picture input interface may be preconfigured, and the embodiment of the present disclosure is not limited. For example, a virtual button for triggering the locked picture selection trigger operation may be provided in the picture selection trigger area of the locked picture input interface, and when the user clicks the virtual button, the locked picture selection trigger operation is determined to be triggered, and then the locked picture selection interface is displayed.
• Receiving a locked picture shooting trigger operation of the user through an image shooting trigger area of the locked picture input interface, calling a shooting device based on the locked picture shooting trigger operation, and taking the picture shot by the shooting device as the locked picture.
In practical application, a user can also trigger the locked picture shooting triggering operation to call the shooting device through the image shooting triggering area in the locked picture input interface, then shoot a picture through the shooting device, and directly take the shot picture as the locked picture.
The mode that the user triggers the locked picture shooting triggering operation through the image shooting triggering area of the locked picture input interface may be preconfigured, and the embodiment of the present disclosure is not limited. For example, a virtual button for triggering a locked picture shooting trigger operation may be set in an image shooting trigger area in the locked picture input interface, when a user clicks the virtual button, it is considered that the user has triggered the locked picture shooting trigger operation, and then the shooting device may be called and run, at which time the user may take a picture based on the running shooting device, and when a confirmation operation of the user for a certain shot picture is received, the picture is taken as the locked picture.
Furthermore, after the locking picture input by the user is received through the locking picture input interface, the identifier of the locking picture is stored in association with the target object, so that when an unlocking operation for the target object is later received, the locking picture can be retrieved according to the identifier stored in association with the target object.
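A minimal sketch of this association step follows. A plain in-memory dictionary stands in for whatever persistent store the terminal device actually uses, and all names are illustrative assumptions.

```python
from typing import Dict, Optional

# In-memory stand-in for the store that associates locking picture identifiers with objects.
lock_picture_by_object: Dict[str, str] = {}

def save_lock_picture(target_object_id: str, lock_picture_id: str) -> None:
    """Store the locking picture identifier in association with the target object."""
    lock_picture_by_object[target_object_id] = lock_picture_id

def get_lock_picture_id(target_object_id: str) -> Optional[str]:
    """Look up the locking picture when an unlocking operation for the object is received."""
    return lock_picture_by_object.get(target_object_id)
```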
Based on the same principle as the method shown in fig. 1, an embodiment of the present disclosure further provides a device 30 for unlocking a target object, as shown in fig. 3, the device 30 for unlocking a target object may include an unlocking information receiving module 310, a locking information obtaining module 320, a matching degree determining module 330, and an unlocking result determining module 340, where:
the unlocking information receiving module 310 is configured to receive an unlocking picture for the target object and/or an unlocking gesture for the target object input by a user;
the locking information obtaining module 320 is configured to acquire a locking picture and/or a locking gesture of the target object preset by the user;
the matching degree determining module 330 is configured to determine a first matching degree between the unlocking picture and the locking picture, and/or a second matching degree between the unlocking gesture and the locking gesture;
and the unlocking result determining module 340 is configured to determine that the target object is unlocked when it is determined that the first matching degree and/or the second matching degree satisfies the preset condition.
In an optional embodiment of the present disclosure, when receiving an unlocking picture for the target object input by the user, the unlocking information receiving module is specifically configured to:
displaying an unlocking picture input interface after receiving an unlocking trigger operation of a user for a target object;
and receiving an unlocking picture input by a user through the unlocking picture input interface.
In an optional embodiment of the present disclosure, when receiving the unlocking picture for the target object through the unlocking picture input interface, the unlocking information receiving module is specifically configured to perform any one of the following:
receiving a first drawing operation of a user through a first graphic drawing area in an unlocking picture input interface, and generating an unlocking picture based on the first drawing operation;
receiving an unlocking picture selection trigger operation of a user through a picture selection trigger area of an unlocking picture input interface; displaying an unlocking picture selection interface based on an unlocking picture selection trigger operation; when receiving an unlocking picture selection operation of a user through an unlocking picture selection interface, taking a picture corresponding to the unlocking picture selection operation as an unlocking picture;
receiving an unlocking picture shooting trigger operation of a user through an image shooting trigger area of an unlocking picture input interface, calling a shooting device based on the unlocking picture shooting trigger operation, and taking a picture shot by the shooting device as an unlocking picture.
In an optional embodiment of the present disclosure, the first matching degree includes an object matching degree between an object in the unlocked picture and the object at the corresponding position in the locked picture, and the preset condition includes that the object matching degree is greater than a first threshold.
In an optional embodiment of the present disclosure, the first matching degree further includes a position matching degree between the position of an object in the unlocked picture and the position of the object at the corresponding position in the locked picture, and the preset condition further includes that the position matching degree is greater than a second threshold.
In an optional embodiment of the present disclosure, the apparatus further includes a locked picture setting module, configured to:
before receiving an unlocking picture which is input by a user and aims at a target object, if receiving a locking picture setting operation of the user aiming at the target object, displaying a locking picture input interface;
and receiving a locking picture input by a user through a locking picture input interface.
In an optional embodiment of the present disclosure, when receiving the locked picture input by the user through the locked picture input interface, the locked picture setting module is specifically configured to perform any one of the following:
receiving a second drawing operation of the user through a second graphic drawing area in the locked picture input interface, and generating a locked picture based on the second drawing operation;
receiving a locked picture selection trigger operation of a user through a picture selection trigger area of a locked picture input interface; displaying a locking picture selection interface based on a locking picture selection trigger operation; when a locking picture selection operation of a user is received through a locking picture selection interface, taking a picture corresponding to the locking picture selection operation as a locking picture;
and receiving a locked picture shooting trigger operation of the user through an image shooting trigger area of the locked picture input interface, calling a shooting device based on the locked picture shooting trigger operation, and taking the picture shot by the shooting device as the locked picture.
In an alternative embodiment of the present disclosure, the apparatus is included in a terminal device, and the unlocking gesture and the locking gesture are characterized by the rotation angle of the terminal device relative to a set direction.
The device for unlocking a target object according to the embodiment of the present disclosure may execute the method for unlocking a target object provided by the embodiments of the present disclosure, and its implementation principle is similar. The actions performed by each module of the device correspond to the steps in the method for unlocking a target object described in the embodiments of the present disclosure; for a detailed functional description of each module, reference may be made to the description of the corresponding method shown above, which is not repeated here.
Based on the same principle as the method shown in the embodiments of the present disclosure, embodiments of the present disclosure also provide an electronic device, which may include but is not limited to: a processor and a memory; a memory for storing computer operating instructions; and the processor is used for executing the method shown in the embodiment by calling the computer operation instruction.
Based on the same principle as the method shown in the embodiment of the present disclosure, an embodiment of the present disclosure further provides a computer-readable storage medium, where at least one instruction, at least one program, a code set, or an instruction set is stored in the computer-readable storage medium, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the method shown in the embodiment, which is not described herein again.
Referring now to FIG. 4, a block diagram of an electronic device 600 suitable for use in implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 4 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
The electronic device includes: a memory and a processor, wherein the processor may be referred to as the processing device 601 hereinafter, and the memory may include at least one of a Read Only Memory (ROM)602, a Random Access Memory (RAM)603 and a storage device 608 hereinafter, which are specifically shown as follows:
as shown in fig. 4, electronic device 600 may include a processing means (e.g., central processing unit, graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
In general, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a liquid crystal display (LCD), speaker, vibrator, etc.; storage devices 608 including, for example, magnetic tape, hard disk, etc.; and communication devices 609.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communications network). Examples of communications networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules or units described in the embodiments of the present disclosure may be implemented by software or hardware. Wherein the designation of a module or unit does not in some cases constitute a limitation of the unit itself.
For example, without limitation, exemplary types of hardware logic that may be used include Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and so forth.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, [Example A1] there is provided a method of unlocking a target object, comprising:
receiving an unlocking picture for a target object and/or an unlocking gesture for the target object, which are input by a user;
acquiring a locking picture and/or a locking gesture of a target object preset by a user;
determining a first matching degree between the unlocking picture and the locking picture and/or a second matching degree between the unlocking gesture and the locking gesture;
and when the first matching degree and/or the second matching degree meets the preset condition, determining that the target object passes the unlocking.
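Purely as an illustrative, non-limiting aid to example A1 above, the unlocking decision can be pictured as the short Python sketch below; the function name, the threshold values, and the convention that an absent input simply skips its check are assumptions made here for readability and are not part of the disclosure.

def unlock_passes(first_degree=None, second_degree=None,
                  picture_threshold=0.8, gesture_threshold=0.9):
    # first_degree: matching degree between the unlocking picture and the locking picture
    # second_degree: matching degree between the unlocking gesture and the locking gesture
    checks = []
    if first_degree is not None:
        checks.append(first_degree >= picture_threshold)
    if second_degree is not None:
        checks.append(second_degree >= gesture_threshold)
    # The target object is treated as unlocked only when at least one matching degree
    # was supplied and every supplied degree meets its (hypothetical) preset threshold.
    return bool(checks) and all(checks)

For instance, unlock_passes(first_degree=0.92) would report success when only a picture is used, while unlock_passes(first_degree=0.92, second_degree=0.5) would not.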
A2, the method according to A1, wherein receiving the unlocking picture for the target object input by the user comprises:
displaying an unlocking picture input interface after receiving an unlocking trigger operation of a user for a target object;
and receiving an unlocking picture input by a user through the unlocking picture input interface.
A3, the method according to A2, wherein receiving the unlocking picture for the target object through the unlocking picture input interface comprises any one of the following:
receiving a first drawing operation of a user through a first graphic drawing area in an unlocking picture input interface, and generating an unlocking picture based on the first drawing operation;
receiving an unlocking picture selection trigger operation of a user through a picture selection trigger area of an unlocking picture input interface; displaying an unlocking picture selection interface based on an unlocking picture selection trigger operation; when receiving an unlocking picture selection operation of a user through an unlocking picture selection interface, taking a picture corresponding to the unlocking picture selection operation as an unlocking picture;
receiving an unlocking picture shooting trigger operation of a user through an image shooting trigger area of an unlocking picture input interface, calling a shooting device based on the unlocking picture shooting trigger operation, and taking a picture shot by the shooting device as an unlocking picture.
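As a rough illustration of the three acquisition paths of example A3 above, a client could dispatch on the interface area the user triggers; the enum values and the callback names below are hypothetical and only sketch one possible arrangement.

from enum import Enum

class PictureInputMode(Enum):
    DRAW = "first graphic drawing area"
    SELECT = "picture selection trigger area"
    CAPTURE = "image shooting trigger area"

def acquire_unlocking_picture(mode, draw_to_picture, open_picture_picker, open_camera):
    # The three callables are supplied by the surrounding UI layer (names assumed here).
    if mode is PictureInputMode.DRAW:
        return draw_to_picture()        # generate the picture from the user's drawing operation
    if mode is PictureInputMode.SELECT:
        return open_picture_picker()    # picture chosen on the unlocking picture selection interface
    if mode is PictureInputMode.CAPTURE:
        return open_camera()            # photo taken by the invoked shooting device
    raise ValueError(f"unsupported input mode: {mode}")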
A4, the method according to A1, wherein the first matching degree includes an object matching degree between an object in the unlocking picture and an object at the corresponding position in the locking picture, and the preset condition includes the object matching degree being greater than a first threshold.
A5, the method according to A4, wherein the first matching degree further includes a position matching degree between the position of the object in the unlocking picture and the position of the corresponding object in the locking picture, and the preset condition further includes the position matching degree being greater than a second threshold.
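To make the first matching degree of examples A4 and A5 above concrete, one possible (and deliberately simplified) sketch compares object labels and their normalized positions; the distance-based position score and the two threshold values are assumptions for illustration only.

def picture_matching_degrees(unlock_objects, lock_objects):
    # Each argument is a list of (label, (x, y)) pairs with positions normalized to [0, 1].
    if not lock_objects:
        return 0.0, 0.0
    matched = 0
    position_scores = []
    for (lock_label, (lx, ly)), (unlock_label, (ux, uy)) in zip(lock_objects, unlock_objects):
        if lock_label == unlock_label:   # same object found at the corresponding position
            matched += 1
            distance = ((lx - ux) ** 2 + (ly - uy) ** 2) ** 0.5
            position_scores.append(1.0 - min(1.0, distance))  # closer positions score nearer to 1
    object_degree = matched / len(lock_objects)
    position_degree = sum(position_scores) / len(position_scores) if position_scores else 0.0
    return object_degree, position_degree

FIRST_THRESHOLD = 0.8    # hypothetical first threshold (example A4)
SECOND_THRESHOLD = 0.7   # hypothetical second threshold (example A5)

Under this sketch, the preset condition of examples A4 and A5 would read object_degree > FIRST_THRESHOLD and position_degree > SECOND_THRESHOLD.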
A6, the method according to any one of A1-A5, further comprising, before receiving the unlocking picture for the target object input by the user:
displaying a locking picture input interface when receiving a locking picture setting operation of a user for a target object;
and receiving a locking picture input by a user through a locking picture input interface.
A7, the method according to A6, wherein receiving the locking picture input by the user through the locking picture input interface comprises any one of the following:
receiving a second drawing operation of the user through a second graphic drawing area in the locking picture input interface, and generating the locking picture based on the second drawing operation;
receiving a locking picture selection trigger operation of the user through a picture selection trigger area of the locking picture input interface; displaying a locking picture selection interface based on the locking picture selection trigger operation; when a locking picture selection operation of the user is received through the locking picture selection interface, taking a picture corresponding to the locking picture selection operation as the locking picture;
and receiving a locking picture shooting trigger operation of the user through an image shooting trigger area of the locking picture input interface, calling a shooting device based on the locking picture shooting trigger operation, and taking the picture shot by the shooting device as the locking picture.
A8, the method according to A1, wherein the method is performed by a terminal device, and the unlocking gesture and the locking gesture are characterized by a rotation angle of the terminal device from a set direction.
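Since example A8 above characterizes both gestures by a rotation angle from a set direction, gesture matching may reduce to a simple angular comparison; the mapping of the angle difference onto a [0, 1] matching degree below is an assumption for illustration, not a requirement of the disclosure.

def gesture_matching_degree(unlocking_angle_deg, locking_angle_deg):
    # Smallest angular difference in [0, 180] degrees, mapped linearly onto [0, 1].
    diff = abs(unlocking_angle_deg - locking_angle_deg) % 360.0
    diff = min(diff, 360.0 - diff)
    return 1.0 - diff / 180.0

# For instance, a terminal rotated 90 degrees against a locking gesture of 85 degrees gives
# gesture_matching_degree(90.0, 85.0) of roughly 0.97, which would satisfy a hypothetical 0.95 condition.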
According to one or more embodiments of the present disclosure, [Example B1] there is provided an apparatus for unlocking a target object, comprising:
the unlocking information receiving device is used for receiving an unlocking picture for the target object and/or an unlocking gesture for the target object input by a user;
the locking information acquisition device is used for acquiring a locking picture and/or a locking gesture of a target object preset by a user;
the matching degree determining device is used for determining a first matching degree between the unlocking picture and the locking picture and/or a second matching degree between the unlocking gesture and the locking gesture;
and the unlocking result determining device is used for determining that the target object passes the unlocking when the first matching degree and/or the second matching degree meets the preset condition.
B2, the apparatus according to B1, wherein, when receiving the unlocking picture for the target object input by the user, the unlocking information receiving device is specifically configured to:
displaying an unlocking picture input interface after receiving an unlocking trigger operation of a user for a target object;
and receiving an unlocking picture input by a user through the unlocking picture input interface.
B3, the apparatus according to B2, wherein the unlocking information receiving device, when receiving the unlocking picture for the target object input by the user, performs any one of the following:
receiving a first drawing operation of a user through a first graphic drawing area in an unlocking picture input interface, and generating an unlocking picture based on the first drawing operation;
receiving an unlocking picture selection trigger operation of a user through a picture selection trigger area of an unlocking picture input interface; displaying an unlocking picture selection interface based on an unlocking picture selection trigger operation; when receiving an unlocking picture selection operation of a user through an unlocking picture selection interface, taking a picture corresponding to the unlocking picture selection operation as an unlocking picture;
receiving an unlocking picture shooting trigger operation of a user through an image shooting trigger area of an unlocking picture input interface, calling a shooting device based on the unlocking picture shooting trigger operation, and taking a picture shot by the shooting device as an unlocking picture.
B4, the apparatus according to B1, wherein the first matching degree includes an object matching degree between an object in the unlocking picture and an object at the corresponding position in the locking picture, and the preset condition includes the object matching degree being greater than a first threshold.
B5, the apparatus according to B4, wherein the first matching degree further includes a position matching degree between the position of the object in the unlocking picture and the position of the corresponding object in the locking picture, and the preset condition further includes the position matching degree being greater than a second threshold.
B6, the apparatus according to any one of B1-B5, further comprising a locking picture setting module for:
before receiving the unlocking picture for the target object input by the user, displaying a locking picture input interface if a locking picture setting operation of the user for the target object is received;
and receiving a locking picture input by a user through a locking picture input interface.
B7, the apparatus according to B6, wherein the locking picture setting module, when receiving the locking picture input by the user through the locking picture input interface, performs any one of the following:
receiving a second drawing operation of the user through a second graphic drawing area in the locking picture input interface, and generating the locking picture based on the second drawing operation;
receiving a locking picture selection trigger operation of the user through a picture selection trigger area of the locking picture input interface; displaying a locking picture selection interface based on the locking picture selection trigger operation; when a locking picture selection operation of the user is received through the locking picture selection interface, taking a picture corresponding to the locking picture selection operation as the locking picture;
and receiving a locking picture shooting trigger operation of the user through an image shooting trigger area of the locking picture input interface, calling a shooting device based on the locking picture shooting trigger operation, and taking the picture shot by the shooting device as the locking picture.
B8, the apparatus according to B1, wherein the apparatus is included in a terminal device, and the unlocking gesture and the locking gesture are characterized by a rotation angle of the terminal device from a set direction.
According to one or more embodiments of the present disclosure, [Example C1] there is provided an electronic device, comprising:
a processor and a memory;
a memory for storing computer operating instructions;
a processor for executing the method of any one of A1-A8 by calling the computer operating instructions.
According to one or more embodiments of the present disclosure, [Example D1] there is provided a computer readable medium, characterized in that the readable medium stores at least one instruction, at least one program, a code set, or a set of instructions, which is loaded and executed by a processor to implement the method of any one of A1 to A8.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other embodiments formed by any combination of the features described above or their equivalents without departing from the spirit of the disclosure, for example, embodiments formed by interchanging the above features with (but not limited to) features having similar functions disclosed in this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (11)

1. A method of unlocking a target object, comprising:
receiving an unlocking picture for a target object and/or an unlocking gesture for the target object, which are input by a user;
acquiring a locking picture and/or a locking gesture of the target object preset by a user;
determining a first matching degree between the unlocking picture and the locking picture and/or a second matching degree between the unlocking gesture and the locking gesture;
and when the first matching degree and/or the second matching degree is determined to meet the preset condition, determining that the target object passes the unlocking.
2. The method according to claim 1, wherein receiving the unlocking picture for the target object input by the user comprises:
displaying an unlocking picture input interface after receiving an unlocking trigger operation of a user for a target object;
and receiving an unlocking picture input by a user through the unlocking picture input interface.
3. The method according to claim 2, wherein the receiving, through the unlocking picture input interface, the unlocking picture for the target object input by the user includes any one of:
receiving a first drawing operation of the user through a first graphic drawing area in the unlocking picture input interface, and generating the unlocking picture based on the first drawing operation;
receiving an unlocking picture selection trigger operation of a user through a picture selection trigger area of the unlocking picture input interface; displaying an unlocking picture selection interface based on the unlocking picture selection trigger operation; when an unlocking picture selection operation of a user is received through the unlocking picture selection interface, taking a picture corresponding to the unlocking picture selection operation as the unlocking picture;
receiving an unlocking picture shooting trigger operation of a user through an image shooting trigger area of the unlocking picture input interface, calling a shooting device based on the unlocking picture shooting trigger operation, and taking a picture shot through the shooting device as the unlocking picture.
4. The method according to claim 1, wherein the first matching degree comprises an object matching degree between an object in the unlocking picture and an object at the corresponding position in the locking picture, and the preset condition comprises the object matching degree being greater than a first threshold.
5. The method according to claim 4, wherein the first matching degree further comprises a position matching degree between the position of the object in the unlocking picture and the position of the corresponding object in the locking picture, and the preset condition further comprises the position matching degree being greater than a second threshold.
6. The method according to any one of claims 1-5, further comprising, before receiving the unlocking picture for the target object input by the user:
displaying a locking picture input interface when receiving a locking picture setting operation of the user for the target object;
and receiving the locking picture input by the user through the locking picture input interface.
7. The method according to claim 6, wherein the receiving, through the locking picture input interface, the locking picture input by the user includes any one of:
receiving a second drawing operation of the user through a second graphic drawing area in the locking picture input interface, and generating the locking picture based on the second drawing operation;
receiving a locking picture selection trigger operation of the user through a picture selection trigger area of the locking picture input interface; displaying a locking picture selection interface based on the locking picture selection trigger operation; when a locking picture selection operation of the user is received through the locking picture selection interface, taking a picture corresponding to the locking picture selection operation as the locking picture;
and receiving a locking picture shooting trigger operation of the user through an image shooting trigger area of the locking picture input interface, calling a shooting device based on the locking picture shooting trigger operation, and taking a picture shot by the shooting device as the locking picture.
8. The method according to claim 1, characterized in that the method is performed by a terminal device, and the unlocking gesture and the locking gesture are characterized by a rotation angle of the terminal device from a set direction.
9. An apparatus for unlocking a target object, comprising:
the unlocking information receiving device is used for receiving an unlocking picture for a target object and/or an unlocking gesture for the target object input by a user;
the locking information acquisition device is used for acquiring a locking picture and/or a locking gesture of the target object, which are preset by a user;
matching degree determination means for determining a first matching degree between the unlocking picture and the locking picture, and/or a second matching degree between the unlocking gesture and the locking gesture;
and the unlocking result determining device is used for determining that the target object passes the unlocking when the first matching degree and/or the second matching degree are determined to meet the preset condition.
10. An electronic device, comprising:
a processor and a memory;
the memory is used for storing computer operation instructions;
the processor is used for executing the method of any one of claims 1 to 8 by calling the computer operation instruction.
11. A computer readable medium storing at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the method of any one of claims 1 to 8.
CN202010193363.7A 2020-03-18 2020-03-18 Target object unlocking method and device, electronic equipment and readable medium Pending CN111400693A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010193363.7A CN111400693A (en) 2020-03-18 2020-03-18 Target object unlocking method and device, electronic equipment and readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010193363.7A CN111400693A (en) 2020-03-18 2020-03-18 Target object unlocking method and device, electronic equipment and readable medium

Publications (1)

Publication Number Publication Date
CN111400693A true CN111400693A (en) 2020-07-10

Family

ID=71436597

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010193363.7A Pending CN111400693A (en) 2020-03-18 2020-03-18 Target object unlocking method and device, electronic equipment and readable medium

Country Status (1)

Country Link
CN (1) CN111400693A (en)

Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102368200A (en) * 2011-10-28 2012-03-07 青岛海信移动通信技术股份有限公司 Touch screen unlocking method and electronic product with touch screen
CN102736853A (en) * 2012-05-17 2012-10-17 北京三星通信技术研究有限公司 Screen unlocking method, screen locking method and terminal
CN102880489A (en) * 2012-09-13 2013-01-16 百度在线网络技术(北京)有限公司 Method and device for starting application program of mobile terminal as well as mobile terminal
CN102929515A (en) * 2012-10-29 2013-02-13 广东欧珀移动通信有限公司 Mobile terminal unlocking method and mobile terminal
CN103106034A (en) * 2013-02-05 2013-05-15 中标软件有限公司 Unlocking method and unlocking system for electronic device and electronic device screen or electronic device application
CN103116465A (en) * 2013-02-06 2013-05-22 中标软件有限公司 Screen of electronic equipment or applied unlocking method and system
CN103167143A (en) * 2012-09-20 2013-06-19 深圳市金立通信设备有限公司 Gravity ball unlocking system and method of mobile phone
US20130198837A1 (en) * 2012-02-01 2013-08-01 University Of Seoul Industry Cooperation Foundation Unlocking schemes
CN103927106A (en) * 2013-01-14 2014-07-16 富泰华工业(深圳)有限公司 Application program starting system and method
CN104536642A (en) * 2014-12-09 2015-04-22 小米科技有限责任公司 Unlocking method and device
CN104573444A (en) * 2015-01-20 2015-04-29 广东欧珀移动通信有限公司 Terminal unlocking method and device
CN105224840A (en) * 2015-10-14 2016-01-06 上海斐讯数据通信技术有限公司 A kind of unlock method of mobile terminal, system for unlocking and mobile terminal
CN106096377A (en) * 2016-06-21 2016-11-09 北京奇虎科技有限公司 Application unlocking method, device and the mobile terminal of a kind of mobile terminal
CN106469002A (en) * 2015-08-17 2017-03-01 阿里巴巴集团控股有限公司 A kind of method and apparatus for unblock
CN106488034A (en) * 2016-11-24 2017-03-08 努比亚技术有限公司 A kind of method realizing unlocking and mobile terminal
CN106909812A (en) * 2015-12-23 2017-06-30 北京奇虎科技有限公司 Terminal unlocking processing method and terminal
CN107015732A (en) * 2017-04-28 2017-08-04 维沃移动通信有限公司 A kind of interface display method and mobile terminal
CN107346387A (en) * 2017-06-23 2017-11-14 深圳传音通讯有限公司 Unlocking method and device
CN107368730A (en) * 2017-07-31 2017-11-21 广东欧珀移动通信有限公司 Unlock verification method and device
CN109409071A (en) * 2018-11-13 2019-03-01 湖北文理学院 Unlocking method, device and the electronic equipment of electronic equipment


Similar Documents

Publication Publication Date Title
RU2643473C2 (en) Method and tools for fingerprinting identification
CN106778141B (en) Unlocking method and device based on gesture recognition and mobile terminal
CN111783756B (en) Text recognition method and device, electronic equipment and storage medium
US10554803B2 (en) Method and apparatus for generating unlocking interface, and electronic device
CN111786876B (en) Information processing method, device, electronic equipment and computer readable medium
CN112804445B (en) Display method and device and electronic equipment
CN105446636B (en) Dynamic unlocking method and electronic device
CN112416200A (en) Display method, display device, electronic equipment and readable storage medium
US20140232748A1 (en) Device, method and computer readable recording medium for operating the same
CN111935111B (en) Interaction method and device and electronic equipment
CN107045604A (en) Information processing method and device
CN111445415A (en) Image restoration method and device, electronic equipment and storage medium
CN113807253A (en) Face recognition method and device, electronic equipment and storage medium
CN107066864B (en) Application icon display method and device
CN111897474A (en) File processing method and electronic equipment
CN112181559A (en) Interface display method and device and electronic equipment
CN111400693A (en) Target object unlocking method and device, electronic equipment and readable medium
CN116596748A (en) Image stylization processing method, apparatus, device, storage medium, and program product
CN107341482B (en) Fingerprint identification method and device and computer readable storage medium
CN115082368A (en) Image processing method, device, equipment and storage medium
CN114549983A (en) Computer vision model training method and device, electronic equipment and storage medium
CN114648649A (en) Face matching method and device, electronic equipment and storage medium
CN114283476A (en) Unlocking control method and device, electronic equipment and readable storage medium
CN112765620A (en) Display control method, display control device, electronic device, and medium
CN112261216A (en) Terminal control method and device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230428

Address after: Room 802, Information Building, 13 Linyin North Street, Pinggu District, Beijing, 101299

Applicant after: Beijing youzhuju Network Technology Co.,Ltd.

Address before: No. 715, 7th floor, building 3, 52 Zhongguancun South Street, Haidian District, Beijing 100081

Applicant before: Beijing infinite light field technology Co.,Ltd.