CN111723843B - Sign-in method, sign-in device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111723843B
CN111723843B (application CN202010414646.XA)
Authority
CN
China
Prior art keywords
picture
sign
check
task
image data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010414646.XA
Other languages
Chinese (zh)
Other versions
CN111723843A (en)
Inventor
彭飞
刘文军
邓竹立
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuba Co Ltd
Original Assignee
Wuba Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuba Co Ltd
Priority to CN202010414646.XA
Publication of CN111723843A
Application granted
Publication of CN111723843B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 1/00 Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people
    • G07C 1/10 Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people together with the recording, indicating or registering of other data, e.g. of signs of identity
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a sign-in method, a sign-in device, electronic equipment and a storage medium, and relates to the technical field of the Internet. The method comprises the following steps: receiving a sign-in instruction for a sign-in task, and acquiring image data obtained by scanning with an AR device together with current real-time positioning information; if the image data contains a picture matching a target picture, and the error between the real-time positioning information and target position information is within a preset error range, confirming that the check-in for the check-in task is successful; the target picture is a verification picture preset for the check-in task, and the target position information is position information preset for the check-in task. On the basis of check-in based on geographic positioning information, AR technology is used to eliminate check-in errors caused by drift in the positioning information, so that a user can complete the check-in only by reaching the exact location, and the accuracy of the check-in result is improved.

Description

Sign-in method, sign-in device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of internet technologies, and in particular, to a sign-in method, a sign-in device, an electronic device, and a storage medium.
Background
In the field of O2O (Online To Offline), how to use technical means to combine online and offline activity is a key focus for promoting the informatization of offline brick-and-mortar industries and for using mobile technology to improve their efficiency. A common technical means in this field is for an App (application program) to display a sign-in function that instructs the user to go to an offline physical store and complete a sign-in task in the App. Because the App can verify the user's current geographic positioning information, faked check-ins can be prevented, and the user can complete the activity only by going to a real physical store. Sign-in activities of this type can guide online App traffic to offline physical stores, reduce the customer-acquisition costs that offline stores would incur through traditional means (e.g., handing out leaflets, placing newspaper ads, and advertising on television), and greatly improve the stores' exposure and sales.
In the related art, an important means of check-in verification is to check whether the current geographic positioning information (e.g., latitude and longitude) reported by the user's App is consistent with expectations. However, because of drift and errors in positioning information, check-in activities can in many cases be completed even if the user does not enter the physical store. In particular, when several merchants are adjacent to one another, they are difficult to distinguish, the accuracy of the check-in result is low, and the intended effect of connecting online and offline is difficult to achieve.
Disclosure of Invention
The embodiments of the invention provide a check-in method, a check-in device, electronic equipment and a storage medium, which are used to solve the problems that existing check-in results are low in accuracy and that the online-offline connection falls short of the expected effect.
In order to solve the technical problems, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a sign-in method, including:
receiving a sign-in instruction aiming at a sign-in task, and acquiring image data obtained through AR equipment scanning and current real-time positioning information;
if the image data contains a picture matching the target picture, and the error between the real-time positioning information and the target position information is within a preset error range, confirming that the check-in for the check-in task is successful;
the target picture is a verification picture preset for the check-in task, and the target position information is position information preset for the check-in task.
Optionally, the step of receiving a check-in instruction for any check-in task and acquiring image data obtained by scanning by the AR device includes:
receiving a check-in instruction aiming at the check-in task, acquiring check-in characteristic information corresponding to the check-in task, and displaying the check-in characteristic information to a user so as to prompt the user to control the scanning range of the AR equipment;
acquiring image data obtained by scanning with the AR device;
the sign-in characteristic information is obtained according to the target picture.
Optionally, the sign-in feature information does not contain the complete target picture.
Optionally, the target picture includes a first preset pattern for the real world and a second preset pattern for the virtual object, and before the step of confirming that the check-in for the check-in task is successful, the method further includes:
extracting a first pattern area for the real world and a second pattern area for a virtual object contained in the picture for any picture in the image data;
and if the first pattern area is matched with the first preset pattern and the second pattern area is matched with the second preset pattern, confirming that a picture matched with a target picture exists in the image data.
Optionally, before the step of confirming that the check-in for the check-in task is successful, the method further comprises:
for any picture in the image data, acquiring a third pattern area for the real world contained in the picture and a virtual object added by the AR equipment when the picture is obtained by scanning;
If the third pattern area is matched with the target picture and the AR equipment adds a virtual object consistent with a preset virtual object, confirming that a picture matched with the target picture exists in the image data;
the preset virtual object is a virtual object preset for the sign-in task.
In a second aspect, an embodiment of the present invention further provides a sign-in apparatus, including:
the data acquisition module is used for receiving a sign-in instruction aiming at a sign-in task and acquiring image data obtained through AR equipment scanning and current real-time positioning information;
the sign-in confirmation module is used for confirming that sign-in for the sign-in task is successful if the image data contains a picture matched with the target picture and the error of the real-time positioning information and the target position information is in a preset error range;
the target picture is a verification picture preset for the check-in task, and the target position information is position information preset for the check-in task.
Optionally, the data acquisition module includes:
the sign-in feature prompting sub-module is used for receiving a sign-in instruction for the sign-in task, acquiring sign-in feature information corresponding to the sign-in task, and displaying the sign-in feature information to a user so as to prompt the user to control the scanning range of the AR equipment;
An image data acquisition sub-module, configured to acquire image data obtained by scanning the AR device;
the sign-in characteristic information is obtained according to the target picture.
Optionally, the sign-in feature information does not contain the complete target picture.
Optionally, the apparatus further comprises:
a first image data processing module, configured to extract, for any picture in the image data, a first pattern area for the real world and a second pattern area for the virtual object, which are included in the picture;
and the first picture verification confirming module is used for confirming that a picture matched with a target picture exists in the image data if the first pattern area is matched with the first preset pattern and the second pattern area is matched with the second preset pattern.
Optionally, the apparatus further comprises:
the second image data processing module is used for acquiring a third pattern area for the real world contained in any picture in the image data, and a virtual object added by the AR equipment when the picture is obtained by scanning;
a second picture verification confirming module, configured to confirm that a picture matching the target picture exists in the image data if the third pattern area matches the target picture and the AR device adds a virtual object consistent with a preset virtual object;
The preset virtual object is a virtual object preset for the sign-in task.
In a third aspect, an embodiment of the present invention further provides an electronic device, including: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of any one of the check-in methods described in the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer readable storage medium, where a computer program is stored on the computer readable storage medium, where the computer program when executed by a processor implements the steps of any one of the check-in methods according to the first aspect.
In the embodiments of the invention, on the basis of check-in based on geographic positioning information, AR technology is used to eliminate check-in errors caused by drift in the positioning information, so that a user can complete the check-in only by reaching the exact location, and the accuracy of the check-in result is improved.
The foregoing description is merely an overview of the technical solution of the present invention. In order that the technical means of the present invention may be more clearly understood and implemented in accordance with the content of the specification, and in order to make the above and other objects, features and advantages of the present invention more apparent, specific embodiments of the invention are described below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments of the present invention will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of steps of a check-in method in an embodiment of the invention;
FIG. 2 is a flow chart of steps of another check-in method in an embodiment of the invention;
FIG. 3 is a schematic diagram of a check-in device according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of another check-in device according to an embodiment of the present invention;
fig. 5 is a schematic hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, a flowchart illustrating steps of a check-in method according to an embodiment of the present invention is shown.
Step 110, receiving a check-in instruction for a check-in task, and acquiring image data obtained by scanning the AR device and current real-time positioning information.
Step 120, if there is a picture matching with the target picture in the image data, and the error of the real-time positioning information and the target position information is within a preset error range, confirming that the check-in for the check-in task is successful; the target picture is a verification picture preset for the check-in task, and the target position information is position information preset for the check-in task.
As described above, in technical fields such as O2O, a common technical means is to issue a check-in task that instructs the user to go to an area such as an offline physical store to complete the task. Task check-in is a common functional form on applications (hereinafter referred to as Apps) in O2O scenarios: it takes the check-in function on the App as the carrier and combines it with a task in a physical scene as the trigger condition. For example, an online check-in task may tell the user that check-in with the App is required at the storefront of a certain merchant, and the App restricts the check-in activity by verifying the geographic positioning information of that storefront, so that the user must go to the real merchant storefront and operate the App there to complete the specified task, thereby combining offline and online. The user's current geographic positioning information can be checked through the mobile phone, computer or other electronic equipment carried by the user, which prevents faked check-ins and ensures that the user can complete the task check-in only by going to the real physical store. Check-in activities of this type can lead online users to offline physical stores, reduce the customer-acquisition costs that offline stores would incur through traditional means (e.g., handing out leaflets, placing newspaper ads, and advertising on television), and improve the stores' exposure and sales.
However, an important step of this task check-in approach is to verify whether the user's current geolocation information (e.g., latitude and longitude) is consistent with expectations, and because of drift and errors in positioning information, check-in activities can in many cases be completed even if the user does not enter the physical store. Particularly when merchants are adjacent to one another, they are difficult to distinguish, the accuracy of the check-in result is poor, and the intended online-offline connection is difficult to achieve.
In addition, AR (augmented reality) is a technology that fuses virtual information with the real world. It makes wide use of technical means such as multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, and sensing: computer-generated virtual information such as text, images, three-dimensional models, music and video is simulated and applied to the real world, and the two kinds of information complement each other, thereby "augmenting" the real world. The major mobile development platforms (e.g., Apple and Google) have each released their own AR development frameworks, so developers can use AR technology at low cost. In particular, it is possible to recognize a real-world picture under an AR camera, which allows iOS developers to easily recognize real-world pictures with AR technology when developing iOS applications; the same capability of recognizing real-world pictures under AR is also supported on the Android development platform.
Therefore, in the embodiment of the invention, in order to improve the accuracy of the task sign-in result on the basis of not increasing excessive cost, the task sign-in can be performed by combining the real-time positioning information and the AR technology. Specifically, in the case of receiving a check-in instruction for a check-in task, image data scanned by the AR device and current real-time positioning information may be acquired.
Moreover, in the embodiments of the present invention, the real-time positioning information may be obtained in any available manner; the embodiments of the present invention do not limit this. For example, if the user checks in through an electronic device such as a mobile phone, the real-time location of the electronic device used for checking in can be obtained as the real-time positioning information. The AR device may be any device having AR functions, such as an AR camera or AR glasses.
In addition, in the embodiments of the present invention, to facilitate task check-in, the electronic device used for check-in may be a device that has both a positioning function and an AR function, so that image data scanned by the AR device built into the electronic device can be acquired promptly, and the current real-time positioning information can be obtained through the positioning function of the electronic device.
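As a purely illustrative, non-limiting sketch of this step, the current real-time positioning information could be read on an iOS device through Core Location roughly as follows; the class name and configuration are assumptions of this sketch, not part of the disclosed method.

```swift
import CoreLocation

// Minimal sketch (assumed, not prescribed by this disclosure): reading the
// device's real-time positioning information through the positioning function
// of the electronic device on iOS.
final class RealTimeLocator: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    private(set) var latestLocation: CLLocation?

    override init() {
        super.init()
        manager.delegate = self
        manager.desiredAccuracy = kCLLocationAccuracyBest
    }

    func start() {
        manager.requestWhenInUseAuthorization()
        manager.startUpdatingLocation()
    }

    // Delegate callback: the last element is the most recent positioning fix.
    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        latestLocation = locations.last
    }
}
```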
For example, assume that a check-in task is issued, and checked in on, through a certain App on an electronic device with an AR function. When the user finds the scene corresponding to the check-in task in the merchant store, the user can open the corresponding App and click the check-in button in the App, whereupon the App starts the AR function.
The AR function in the embodiments of the invention can be developed in any available manner, for example with the AR development technology provided by a mainstream mobile development platform, and is a general-purpose basic technology. For example, on the iOS development platform the corresponding AR function can easily be developed with the ARKit framework provided by Apple; similarly, the corresponding AR function can also be developed with the AR development technology provided by Google's Android platform.
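To illustrate the ARKit-based picture recognition mentioned above, the following non-limiting sketch shows how an iOS App could detect a reference image in the camera feed; the class name and the "AR Resources" group name are assumptions of this sketch and would in practice be derived from the target picture of the check-in task.

```swift
import UIKit
import ARKit

// Illustrative sketch only: detecting a real-world reference image with ARKit.
final class CheckInARViewController: UIViewController, ARSCNViewDelegate {
    private let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
        sceneView.delegate = self
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        // Reference images bundled with the App; in this scheme they would be
        // generated from the target picture preset for the check-in task.
        if let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                                  bundle: nil) {
            configuration.detectionImages = referenceImages
        }
        sceneView.session.run(configuration)
    }

    // Called when ARKit recognizes one of the detection images in the camera feed,
    // i.e. when a picture matching the target picture appears in the scanned image data.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor else { return }
        print("Matched reference image: \(imageAnchor.referenceImage.name ?? "unknown")")
        // The positioning check described below can then be performed.
    }
}
```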
Further, the AR device may acquire and analyze the image data it can scan, and compare it against the target picture available for AR verification to see whether the image data acquired from the real scene corresponds to the target picture. If a picture matching the target picture is detected in the image data, and the error between the real-time positioning information and the target position information is within a preset error range, the check-in for the check-in task is confirmed to be successful; the target picture is a verification picture preset for the check-in task, and the target position information is position information preset for the check-in task. If the AR device scans image data containing a picture that corresponds to the target picture, the AR device can stop scanning and proceed with the subsequent operations. If the image data currently scanned by the AR device contains no picture that corresponds to the target picture, the scanning range can be adjusted as the user keeps moving the AR device, and the search continues until a picture matching the target picture is scanned or the user exits the check-in task.
In addition, since the AR technology can identify a picture in the real world contained in the image data obtained by scanning the AR device, the real world picture can be identified by the AR technology to detect whether it matches the target picture. That is, the AR device can complete the scanning of the image data and the detection of whether there is a picture matching the target picture in the image data, without increasing excessive cost while improving the verification efficiency.
A picture matching the target picture may be understood as a picture whose similarity to the target picture reaches a preset similarity, and/or a picture that contains all the content of the target picture, and so on. The specific matching conditions can be customized as required, and the embodiments of the invention do not limit them. Likewise, the target picture and the target position information can be customized as required; the embodiments of the invention do not limit them.
For example, assuming that the sign-in task is to a specific storefront, then the target picture may be a specific picture of the specific storefront under the AR device, and the target location information may be geographic location information of the corresponding specific storefront.
The preset error range can also be customized as required; for example, it can be set so that, taking the target position as the origin, any positioning information within a 200-meter radius is considered to satisfy the requirement.
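The two check-in conditions described above can be expressed compactly as in the following sketch; the 200-meter radius simply mirrors the example error range given here, and all names are illustrative assumptions.

```swift
import CoreLocation

// Illustrative sketch of the combined check-in decision: a picture in the image
// data must match the target picture, and the real-time positioning error must
// fall within the preset error range (200 m in the example above).
struct CheckInVerifier {
    let targetLocation: CLLocation                   // target position information
    let maxErrorMeters: CLLocationDistance = 200     // preset error range (example value)

    func isCheckInSuccessful(pictureMatched: Bool, currentLocation: CLLocation) -> Bool {
        let positionOK = currentLocation.distance(from: targetLocation) <= maxErrorMeters
        return pictureMatched && positionOK
    }
}

// Example usage with hypothetical coordinates:
let verifier = CheckInVerifier(targetLocation: CLLocation(latitude: 39.9042, longitude: 116.4074))
let current = CLLocation(latitude: 39.9049, longitude: 116.4080)
let success = verifier.isCheckInSuccessful(pictureMatched: true, currentLocation: current)
```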
In the embodiments of the invention, in a merchant-oriented check-in scenario, a backend system can be set up for the App that issues check-in tasks, storing information about each cooperating merchant, including but not limited to the merchant's unique identification (merchantId), the merchant's geographic positioning information (longitude, latitude), and photos taken inside the merchant's store for AR verification (merchantPictures). The merchant's geographic positioning information can be understood as the target position information, and a photo for AR verification in the merchant's store can be understood as the target picture. Because each merchant's decoration style generally differs noticeably from the others, the collected in-store photos that can be recognized by AR are unique. The platform system may also run image recognition on the collected photos to detect whether duplicate photos exist. The embodiments of the invention do not limit how the photos are collected: the merchant may take the photos and upload them to the backend system, or the company providing the App service may send professional staff to take them, and so on.
At check-in time, the information of the merchant corresponding to the check-in task can be requested from the backend system of the corresponding App, including but not limited to the merchant's unique identifier, the merchant's geographic positioning information, and the photo for AR verification in the merchant's store, for use in the verification of subsequent steps and further interaction with the backend system.
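For illustration only, the merchant record held by the backend system could be modeled as below; the field names follow the identifiers mentioned above (merchantId, longitude, latitude, merchantPictures), while the types and the Codable conformance are assumptions of this sketch.

```swift
import Foundation

// Sketch of a cooperating merchant's record in the backend system.
struct Merchant: Codable {
    let merchantId: String          // unique identification of the merchant
    let longitude: Double           // geographic positioning information,
    let latitude: Double            // understood as the target position information
    let merchantPictures: [URL]     // in-store photos used for AR verification,
                                    // understood as the target pictures
}
```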
In addition, in the embodiment of the present invention, one or more target pictures may be set for the same sign-in task, and/or one or more target location information may be set, so if it is detected that a picture matching with at least one target picture exists in the image data, and an error between the real-time positioning information and at least one target location information is within a preset error range, a sign-in success for the sign-in task may be confirmed.
In the embodiments of the invention, on the basis of the prior-art check-in based on geographic positioning information, AR technology is used to eliminate the check-in errors caused by drift in the positioning information and to improve check-in accuracy, so that a user can complete the check-in only by actually entering the merchant's store.
Optionally, in an embodiment of the present invention, when acquiring image data, the step 110 may further include:
Step 111, receiving a check-in instruction for the check-in task, acquiring check-in feature information corresponding to the check-in task, and displaying the check-in feature information to a user to prompt the user to control a scanning range of the AR device; the sign-in characteristic information is obtained according to the target picture.
And step 112, acquiring image data obtained through scanning by the AR equipment.
In practical application, when the AR equipment is used for scanning, the area which can be scanned at the same time is limited, so that the scanning process of the AR equipment takes a long time, and the check-in efficiency is affected. Therefore, in the embodiment of the invention, in order to avoid excessive invalid scanning by a user and improve the sign-in efficiency, at least one piece of sign-in characteristic information related to the target picture can be determined based on the target picture, and the sign-in characteristic information is displayed to the user so as to prompt the sign-in user to select a reasonable scanning range of the AR equipment, and then the image data obtained through the AR equipment scanning can be obtained. The content specifically included in the sign-in feature information may be set in a customized manner according to requirements, which is not limited in this embodiment of the present invention.
For example, assuming that the target picture for the current check-in task is a picture of the inside of a merchant store, the check-in feature information may be set to describe a distinctive item of that store that appears in the target picture, such as a painting inside the store, a special table, and so on.
If the check-in feature information corresponding to the current check-in task is, say, a painting inside the store, then after the user arrives at the designated merchant store, the user can look for the indicated feature (such as that painting) inside the store according to the prompt, so that the scanning range of the AR device is steered to a range containing the corresponding check-in feature, and image data scanned by the AR device is then acquired.
In addition, in the embodiments of the present invention, note that for chain stores and franchised stores there may be multiple stores with the same merchant name at different locations, so care should be taken to avoid the user confusing store locations at check-in time and failing to check in.
Optionally, in an embodiment of the present invention, the sign-in feature information does not contain the complete target picture.
In practical application, since the AR camera verifies against the real environment, it currently cannot distinguish whether the scanned object is a pattern printed on a picture or a real object in the environment, so the verification of the real scene by the AR device may still pass when the user cheats.
For example, if the user obtains the target picture of the check-in task in advance and places it anywhere in the real environment, the image data scanned by the AR device will contain the target picture, and the check-in succeeds as long as the error between the real-time positioning information and the target position information is within the preset error range. This is precisely why the App must also verify the positioning information.
Therefore, in the embodiments of the invention, to reduce the possibility of cheating by the user directly obtaining the target picture, the check-in feature information derived from the target picture can be set so that it does not contain the complete target picture. In other words, even if the user obtains the check-in feature information in advance, it is not the complete target picture, so the prompt still works while the possibility of user cheating is effectively reduced, further improving the accuracy of the check-in result.
Further, the check-in feature information can also be set to contain textual descriptions of items in the merchant's store rather than photos of the items, which to some extent further prevents cheating, because some users, after obtaining a photo, could otherwise pass verification by holding the photo up anywhere within the positioning error range. However, this is only a possibility; judging from typical user behavior in real usage scenarios, only a small number of users act this way.
Moreover, if the verification of the real-time positioning information passes, that is, the error between the real-time positioning information and the target position information is within the preset error range, but no picture matching the target picture exists in the image data, the check-in fails; such a case can essentially be judged to be a cheating attempt, and the user can be prompted to go to the correct store to check in.
When both the picture verification and the positioning verification pass, the check-in succeeds and subsequent tasks can be triggered, such as claiming the rewards of the corresponding check-in task in the App (e.g., gold coins, merchant coupons, etc.).
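The outcomes described in the two preceding paragraphs can be summarized by the following non-limiting sketch; the enum cases and their handling are illustrative assumptions rather than part of the claimed method.

```swift
// Illustrative mapping from the two verification results to a check-in outcome.
enum CheckInOutcome {
    case success       // both picture and positioning verification passed
    case wrongStore    // positioning passed but no picture matched: prompt the user
    case outOfRange    // positioning verification failed
}

func evaluate(pictureMatched: Bool, positionWithinRange: Bool) -> CheckInOutcome {
    switch (positionWithinRange, pictureMatched) {
    case (true, true):
        return .success        // trigger follow-up tasks, e.g. claiming rewards in the App
    case (true, false):
        return .wrongStore     // essentially a cheating attempt or the wrong store
    case (false, _):
        return .outOfRange
    }
}
```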
Referring to fig. 2, in an embodiment of the present invention, the target picture includes a first preset pattern for the real world and a second preset pattern for the virtual object, and whether there is a picture matching the target picture in the image data may be determined by:
Step S1, extracting a first pattern area for the real world and a second pattern area for a virtual object, which are contained in any picture in the image data;
step S2, if the first pattern area is matched with the first preset pattern and the second pattern area is matched with the second preset pattern, confirming that a picture matched with a target picture exists in the image data.
In practical application, to further improve the accuracy of the picture verification result and hence of the check-in result, the complexity of picture verification can be increased: when setting the target picture, AR technology's combination of virtual information and the real world can be exploited so that the target picture contains both a first preset pattern for the real world and a second preset pattern for a virtual object. The virtual object may be any virtual object that can be set through AR technology, such as a virtual animation, a virtual icon, or a virtual special effect.
Then, when a picture is verified, verification is performed against both the first preset pattern and the second preset pattern. Specifically, for any picture in the image data, the first pattern area for the real world and the second pattern area for the virtual object contained in the picture are extracted, and if the first pattern area matches the first preset pattern and the second pattern area matches the second preset pattern, it can be confirmed that a picture matching the target picture exists in the image data.
Moreover, in the embodiments of the present invention, the first pattern area for the real world and the second pattern area for the virtual object contained in the picture may be extracted in any available manner; the embodiments of the present invention do not limit this. For example, the first pattern area for the real world and the second pattern area for the virtual object contained in each picture of the image data may be extracted directly by the AR device while scanning, and so on.
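The dual-pattern check of this first mode can be sketched as follows; the disclosure does not specify the image-matching routine, so it is passed in here as a similarity function, and the threshold value is an assumption.

```swift
import CoreGraphics

// Illustrative sketch of the first verification mode: both the real-world
// region and the virtual-object region of a scanned picture must match their
// preset patterns.
func pictureMatchesTarget(firstPatternArea: CGImage,
                          secondPatternArea: CGImage,
                          firstPresetPattern: CGImage,
                          secondPresetPattern: CGImage,
                          threshold: Double = 0.8,
                          similarity: (CGImage, CGImage) -> Double) -> Bool {
    let realWorldMatch = similarity(firstPatternArea, firstPresetPattern) >= threshold
    let virtualObjectMatch = similarity(secondPatternArea, secondPresetPattern) >= threshold
    return realWorldMatch && virtualObjectMatch
}
```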
Referring to fig. 2, in an embodiment of the present invention, whether there is a picture matching a target picture in image data may be determined by:
step S3, aiming at any picture in the image data, acquiring a third pattern area aiming at the real world and contained in the picture, and a virtual object added by the AR equipment when the picture is obtained by scanning;
step S4, if the third pattern area is matched with the target picture, and the AR equipment adds a virtual object consistent with a preset virtual object, confirming that a picture matched with the target picture exists in the image data; the preset virtual object is a virtual object preset for the sign-in task.
In addition, to further increase the complexity of the picture verification and the accuracy of the check-in result, at least one preset virtual object may be added as a check-in verification condition when the check-in verification conditions such as the target picture and the target position information are set; in this case the target picture may contain only a pattern for the real world.
Then, when performing picture verification, for any picture in the image data, a third pattern area for the real world, which is contained in the picture, and a virtual object added by the AR device when scanning to obtain the picture, can be acquired; if the third pattern area is matched with the target picture and the AR equipment adds a virtual object consistent with a preset virtual object, confirming that a picture matched with the target picture exists in corresponding image data; the preset virtual object is a virtual object preset for the sign-in task.
Of course, in the embodiments of the present application, if the target picture contains both the pattern for the real world and the pattern for the virtual object, the corresponding picture can be matched directly against the target picture, without separately acquiring the third pattern area for the real world contained in the picture and checking that the virtual object added by the AR device is consistent with the preset virtual object; a direct match then suffices to confirm that a picture matching the target picture exists in the image data.
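For completeness, the second verification mode described above can be sketched as follows; the VirtualObject type and its identifier are assumptions used only for illustration.

```swift
// Illustrative sketch of the second verification mode: the real-world pattern
// area must match the target picture, and the virtual object added by the AR
// device must be the one preset for the check-in task.
struct VirtualObject: Equatable {
    let identifier: String
}

func secondModeMatches(thirdPatternMatchesTarget: Bool,
                       addedVirtualObject: VirtualObject?,
                       presetVirtualObject: VirtualObject) -> Bool {
    return thirdPatternMatchesTarget && addedVirtualObject == presetVirtualObject
}
```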
In the embodiments of the present invention, whether the image data contains a picture matching the target picture may be determined by either of the two modes above, or by a combination of both; the embodiments of the present invention do not limit this.
In the embodiments of the invention, combining AR with positioning-based check-in avoids the inaccurate check-in results, and the weakened online-offline connection, caused by positioning deviation. In addition, during check-in the user searches for specific items at the check-in location, such as inside a physical store, which strengthens user participation and the user's awareness of the merchant's store, and can effectively improve the online-offline connection.
Referring to fig. 3, a schematic structural diagram of a check-in device according to an embodiment of the present invention is shown.
The sign-in device of the embodiment of the invention comprises: a data acquisition module 210, a check-in confirmation module 220.
The functions of the modules and the interaction relationship between the modules are described in detail below.
The data acquisition module 210 is configured to receive a check-in instruction for a check-in task, and acquire image data obtained by scanning the AR device and current real-time positioning information;
a check-in confirmation module 220, configured to confirm that the check-in for the check-in task is successful if a picture matching with the target picture exists in the image data and an error between the real-time positioning information and the target position information is within a preset error range; the target picture is a verification picture preset for the check-in task, and the target position information is position information preset for the check-in task.
Referring to fig. 4, in an embodiment of the present invention, the data obtaining module 210 may further include:
the sign-in feature prompting sub-module 211 is configured to receive a sign-in instruction for the sign-in task, obtain sign-in feature information corresponding to the sign-in task, and display the sign-in feature information to a user to prompt the user to control a scanning range of the AR device; the sign-in characteristic information is obtained according to the target picture.
An image data acquisition sub-module 212, configured to acquire image data obtained by scanning the AR device.
Optionally, in an embodiment of the present invention, the sign-in feature information does not contain the complete target picture.
Referring to fig. 4, in an embodiment of the present invention, the apparatus may further include:
a first image data processing module 230, configured to extract, for any picture in the image data, a first pattern area for the real world and a second pattern area for the virtual object, which are included in the picture;
the first picture verification confirming module 240 is configured to confirm that a picture matching the target picture exists in the image data if the first pattern region matches the first preset pattern and the second pattern region matches the second preset pattern.
Referring to fig. 4, in an embodiment of the present invention, the apparatus may further include:
a second image data processing module 250, configured to obtain, for any picture in the image data, a third pattern area for the real world included in the picture, and a virtual object added by the AR device when the picture is obtained by scanning;
a second picture verification confirming module 260, configured to confirm that a picture matching the target picture exists in the image data if the third pattern area matches the target picture and the AR device adds a virtual object consistent with a preset virtual object; the preset virtual object is a virtual object preset for the sign-in task.
The sign-in device provided by the embodiment of the present invention can implement each process implemented in the method embodiments of fig. 1 to 2, and in order to avoid repetition, a detailed description is omitted here.
Fig. 5 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present invention.
The electronic device 300 includes, but is not limited to: radio frequency unit 301, network module 302, audio output unit 303, input unit 304, sensor 305, display unit 306, user input unit 307, interface unit 308, memory 309, processor 310, and power supply 311. It will be appreciated by those skilled in the art that the electronic device structure shown in fig. 5 is not limiting of the electronic device and that the electronic device may include more or fewer components than shown, or may combine certain components, or a different arrangement of components. In the embodiment of the invention, the electronic equipment comprises, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer and the like.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 301 may be used to receive and send information or signals during a call, specifically, receive downlink data from a base station, and then process the downlink data with the processor 310; and, the uplink data is transmitted to the base station. Typically, the radio frequency unit 301 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 301 may also communicate with networks and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user through the network module 302, such as helping the user to send and receive e-mail, browse web pages, and access streaming media, etc.
The audio output unit 303 may convert audio data received by the radio frequency unit 301 or the network module 302 or stored in the memory 309 into an audio signal and output as sound. Also, the audio output unit 303 may also provide audio output (e.g., a call signal reception sound, a message reception sound, etc.) related to a specific function performed by the electronic device 300. The audio output unit 303 includes a speaker, a buzzer, a receiver, and the like.
The input unit 304 is used to receive an audio or video signal. The input unit 304 may include a graphics processor (Graphics Processing Unit, GPU) 3041 and a microphone 3042, the graphics processor 3041 processing image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 306. The image frames processed by the graphics processor 3041 may be stored in the memory 309 (or other storage medium) or transmitted via the radio frequency unit 301 or the network module 302. The microphone 3042 may receive sound, and may be capable of processing such sound into audio data. The processed audio data may be converted into a format output that can be transmitted to the mobile communication base station via the radio frequency unit 301 in the case of a telephone call mode.
The electronic device 300 further comprises at least one sensor 305, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 3061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 3061 and/or the backlight when the electronic device 300 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the acceleration in all directions (generally three axes), and can detect the gravity and direction when stationary, and can be used for recognizing the gesture of the electronic equipment (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and knocking), and the like; the sensor 305 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described herein.
The display unit 306 is used to display information input by a user or information provided to the user. The display unit 306 may include a display panel 3061, and the display panel 3061 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 307 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 307 includes a touch panel 3071 and other input devices 3072. The touch panel 3071, also referred to as a touch screen, may collect touch operations thereon or thereabout by a user (e.g., operations of the user on the touch panel 3071 or thereabout the touch panel 3071 using any suitable object or accessory such as a finger, stylus, or the like). The touch panel 3071 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch azimuth of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 310, and receives and executes commands sent by the processor 310. In addition, the touch panel 3071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 307 may include other input devices 3072 in addition to the touch panel 3071. Specifically, other input devices 3072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 3071 may be overlaid on the display panel 3061, and when the touch panel 3071 detects a touch operation thereon or thereabout, the touch operation is transmitted to the processor 310 to determine a type of touch event, and then the processor 310 provides a corresponding visual output on the display panel 3061 according to the type of touch event. Although in fig. 5, the touch panel 3071 and the display panel 3061 are two independent components for implementing the input and output functions of the electronic device, in some embodiments, the touch panel 3071 and the display panel 3061 may be integrated to implement the input and output functions of the electronic device, which is not limited herein.
The interface unit 308 is an interface to which an external device is connected to the electronic apparatus 300. For example, the external devices may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 308 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 300 or may be used to transmit data between the electronic apparatus 300 and an external device.
Memory 309 may be used to store software programs as well as various data. The memory 309 may mainly include a storage program area that may store an operating system, application programs required for at least one function (such as a sound playing function, an image playing function, etc.), and a storage data area; the storage data area may store data (such as audio data, phonebook, etc.) created according to the use of the handset, etc. In addition, memory 309 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 310 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 309, and calling data stored in the memory 309, thereby performing overall monitoring of the electronic device. Processor 310 may include one or more processing units; preferably, the processor 310 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 310.
The electronic device 300 may also include a power supply 311 (e.g., a battery) for powering the various components, and preferably the power supply 311 may be logically coupled to the processor 310 via a power management system that performs functions such as managing charge, discharge, and power consumption.
In addition, the electronic device 300 includes some functional modules, which are not shown, and will not be described herein.
Preferably, the embodiment of the present invention further provides an electronic device, including: the processor 310, the memory 309, and a computer program stored in the memory 309 and capable of running on the processor 310, where the computer program when executed by the processor 310 implements the processes of the foregoing sign-in method embodiment, and achieves the same technical effects, and is not repeated herein.
The embodiment of the invention also provides a computer readable storage medium, on which a computer program is stored; when executed by a processor, the computer program implements the processes of the sign-in method embodiment and can achieve the same technical effects, which are not repeated here to avoid repetition. The computer readable storage medium is, for example, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present invention and the scope of the claims, which are to be protected by the present invention.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a usb disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk, etc.
The foregoing is merely a specific embodiment of the present invention, and the present invention is not limited thereto; any variation or substitution that a person skilled in the art can readily conceive within the technical scope disclosed herein shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention is subject to the protection scope of the claims.

Claims (8)

1. A sign-in method, comprising:
receiving a sign-in instruction for a sign-in task, and acquiring image data obtained by scanning with an AR device together with current real-time positioning information;
if the image data contains a picture matching a target picture, and the error between the real-time positioning information and target position information is within a preset error range, confirming that the sign-in for the sign-in task is successful;
wherein the target picture is a verification picture preset for the sign-in task, and the target position information is position information preset for the sign-in task;
wherein the target picture comprises a first preset pattern for the real world and a second preset pattern for a virtual object, and before the step of confirming that the sign-in for the sign-in task is successful, the method further comprises:
for any picture in the image data, extracting a first pattern area for the real world and a second pattern area for the virtual object contained in the picture;
if the first pattern area matches the first preset pattern and the second pattern area matches the second preset pattern, confirming that a picture matching the target picture exists in the image data;
or, before the step of confirming that the sign-in for the sign-in task is successful, the method further comprises:
for any picture in the image data, acquiring a third pattern area for the real world contained in the picture and the virtual object added by the AR device when the picture was obtained by scanning;
if the third pattern area matches the target picture and the virtual object added by the AR device is consistent with a preset virtual object, confirming that a picture matching the target picture exists in the image data;
the preset virtual object is a virtual object preset for the sign-in task.
2. The method according to claim 1, wherein the step of receiving a sign-in instruction for a sign-in task and acquiring image data obtained by scanning with an AR device comprises:
receiving the sign-in instruction for the sign-in task, acquiring sign-in feature information corresponding to the sign-in task, and displaying the sign-in feature information to a user so as to prompt the user to control the scanning range of the AR device;
acquiring the image data obtained by scanning with the AR device;
wherein the sign-in feature information is obtained according to the target picture.
3. The method of claim 2, wherein the sign-in feature information does not fully contain the target picture.
4. A sign-in device, comprising:
a data acquisition module, configured to receive a sign-in instruction for a sign-in task, and acquire image data obtained by scanning with an AR device together with current real-time positioning information;
a sign-in confirmation module, configured to confirm that the sign-in for the sign-in task is successful if the image data contains a picture matching a target picture and the error between the real-time positioning information and target position information is within a preset error range;
wherein the target picture is a verification picture preset for the sign-in task, and the target position information is position information preset for the sign-in task;
wherein the target picture comprises a first preset pattern for the real world and a second preset pattern for a virtual object, and before the step of confirming that the sign-in for the sign-in task is successful, the device further comprises:
a first image data processing module, configured to extract, for any picture in the image data, a first pattern area for the real world and a second pattern area for the virtual object contained in the picture;
a first picture verification confirming module, configured to confirm that a picture matching the target picture exists in the image data if the first pattern area matches the first preset pattern and the second pattern area matches the second preset pattern;
or, before the step of confirming that the sign-in for the sign-in task is successful, the device further comprises:
a second image data processing module, configured to acquire, for any picture in the image data, a third pattern area for the real world contained in the picture and the virtual object added by the AR device when the picture was obtained by scanning;
a second picture verification confirming module, configured to confirm that a picture matching the target picture exists in the image data if the third pattern area matches the target picture and the virtual object added by the AR device is consistent with a preset virtual object;
the preset virtual object is a virtual object preset for the sign-in task.
5. The device of claim 4, wherein the data acquisition module comprises:
a sign-in feature prompting sub-module, configured to receive a sign-in instruction for the sign-in task, acquire sign-in feature information corresponding to the sign-in task, and display the sign-in feature information to a user so as to prompt the user to control the scanning range of the AR device;
an image data acquisition sub-module, configured to acquire the image data obtained by scanning with the AR device;
wherein the sign-in feature information is obtained according to the target picture.
6. The device of claim 5, wherein the sign-in feature information does not fully contain the target picture.
7. An electronic device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the sign-in method according to any one of claims 1 to 3.
8. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the sign-in method according to any one of claims 1 to 3.
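
As a concrete illustration of the verification logic of claim 1, the Python sketch below confirms a sign-in only when at least one AR-scanned picture matches the preset target picture and the real-time position lies within the preset error range of the target position. It is a minimal sketch under stated assumptions: the names (SignInTask, pictures_match, position_error_m, verify_sign_in), the ORB feature matching, the haversine distance and all thresholds are illustrative choices and are not specified in the patent.

# Illustrative sketch only; names, the matching algorithm and all thresholds are
# assumptions, not part of the patent text. Pictures are assumed to be 8-bit
# grayscale numpy arrays.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt
from typing import List, Tuple

import cv2
import numpy as np


@dataclass
class SignInTask:
    target_picture: np.ndarray              # verification picture preset for the task
    target_position: Tuple[float, float]    # (latitude, longitude) preset for the task
    max_position_error_m: float = 50.0      # preset error range, here assumed to be 50 m


def position_error_m(a: Tuple[float, float], b: Tuple[float, float]) -> float:
    """Haversine distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(h))


def pictures_match(picture: np.ndarray, target: np.ndarray,
                   min_good_matches: int = 30) -> bool:
    """One possible 'picture matches the target picture' test: ORB feature matching."""
    orb = cv2.ORB_create()
    _, des1 = orb.detectAndCompute(picture, None)
    _, des2 = orb.detectAndCompute(target, None)
    if des1 is None or des2 is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    good = [m for m in matches if m.distance < 40]   # keep only close descriptor matches
    return len(good) >= min_good_matches


def verify_sign_in(task: SignInTask,
                   scanned_pictures: List[np.ndarray],
                   real_time_position: Tuple[float, float]) -> bool:
    """Confirms the sign-in only when both conditions of claim 1 hold:
    at least one scanned picture matches the target picture, and the real-time
    position is within the preset error range of the target position."""
    picture_ok = any(pictures_match(p, task.target_picture) for p in scanned_pictures)
    position_ok = position_error_m(real_time_position, task.target_position) <= task.max_position_error_m
    return picture_ok and position_ok

In a similar spirit, the sign-in feature information of claims 2 and 3 could be, for example, a cropped or partially masked version of the target picture, so the user is guided toward the right scanning range without being shown the full verification picture.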
CN202010414646.XA 2020-05-15 2020-05-15 Sign-in method, sign-in device, electronic equipment and storage medium Active CN111723843B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010414646.XA CN111723843B (en) 2020-05-15 2020-05-15 Sign-in method, sign-in device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111723843A CN111723843A (en) 2020-09-29
CN111723843B (en) 2023-09-15

Family

ID=72564543

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010414646.XA Active CN111723843B (en) 2020-05-15 2020-05-15 Sign-in method, sign-in device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111723843B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112767573A (en) * 2020-12-17 2021-05-07 宽衍(北京)科技发展有限公司 Fault card punching method, device, server and storage medium
CN113158085B (en) * 2021-03-31 2023-06-13 五八有限公司 Information switching processing method and device, electronic equipment and storage medium
CN114189550B (en) * 2021-11-30 2024-04-26 北京五八信息技术有限公司 Virtual positioning detection method and device, electronic equipment and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106920079A (en) * 2016-12-13 2017-07-04 阿里巴巴集团控股有限公司 Virtual objects distribution method and device based on augmented reality
CN108197993A (en) * 2017-12-29 2018-06-22 广州特逆特科技有限公司 A kind of method and server for being drained for businessman, excavating potential customers
CN108573406A (en) * 2018-04-10 2018-09-25 四川金亿信财务咨询有限公司 Advertisement marketing system and method on a kind of line based on verification of registering
US10157504B1 (en) * 2018-06-05 2018-12-18 Capital One Services, Llc Visual display systems and method for manipulating images of a real scene using augmented reality
CN109785458A (en) * 2019-01-14 2019-05-21 来奖(深圳)科技有限公司 A kind of user participates in the monitoring management method and device of marketing activity

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Space-sharing AR Interaction on Multiple Mobile Devices with a Depth Camera; Kaneto, Y., Komuro, T.; 2016 IEEE Virtual Reality Conference (VR); 197-198 *
Tracking and registration method based on image matching for an augmented reality assembly system; Zhang Haopeng et al.; Computer Integrated Manufacturing Systems; full text *
Research on virtual-real registration methods in augmented reality; Lin Xiaoming; China Master's Theses Full-text Database, Information Science and Technology series; full text *

Also Published As

Publication number Publication date
CN111723843A (en) 2020-09-29

Similar Documents

Publication Publication Date Title
US11449857B2 (en) Code scanning method, code scanning device and mobile terminal
CN111417028B (en) Information processing method, information processing device, storage medium and electronic equipment
CN111723843B (en) Sign-in method, sign-in device, electronic equipment and storage medium
CN109246360B (en) Prompting method and mobile terminal
CN108256853B (en) Payment method and mobile terminal
CN109240577B (en) Screen capturing method and terminal
CN108629579B (en) Payment method and mobile terminal
WO2020048392A1 (en) Application virus detection method, apparatus, computer device, and storage medium
CN107682359B (en) Application registration method and mobile terminal
CN110457571B (en) Method, device and equipment for acquiring interest point information and storage medium
CN109495638B (en) Information display method and terminal
CN109951889B (en) Internet of things network distribution method and mobile terminal
US20160323434A1 (en) Motion to Connect to Kiosk
CN108510267B (en) Account information acquisition method and mobile terminal
CN110659895A (en) Payment method, payment device, electronic equipment and medium
CN110909264A (en) Information processing method, device, equipment and storage medium
CN108196663B (en) Face recognition method and mobile terminal
CN107895108B (en) Operation management method and mobile terminal
CN113891166A (en) Data processing method, data processing device, computer equipment and medium
WO2021083086A1 (en) Information processing method and device
CN110677537B (en) Note information display method, note information sending method and electronic equipment
CN109547622B (en) Verification method and terminal equipment
CN108596600B (en) Information processing method and terminal
CN112486567B (en) Method and device for sending merging request of codes, electronic equipment and storage medium
CN111444491B (en) Information processing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant