CN111723843A - Sign-in method, device, electronic equipment and storage medium - Google Patents

Sign-in method, device, electronic equipment and storage medium

Info

Publication number
CN111723843A
Authority
CN
China
Prior art keywords
picture
sign
check
task
image data
Prior art date
Legal status
Granted
Application number
CN202010414646.XA
Other languages
Chinese (zh)
Other versions
CN111723843B (en)
Inventor
彭飞
刘文军
邓竹立
Current Assignee
Wuba Co Ltd
Original Assignee
Wuba Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuba Co Ltd
Priority to CN202010414646.XA
Publication of CN111723843A
Application granted
Publication of CN111723843B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/006 - Mixed reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G07 - CHECKING-DEVICES
    • G07C - TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C1/00 - Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people
    • G07C1/10 - Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people, together with the recording, indicating or registering of other data, e.g. of signs of identity
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a sign-in method, a sign-in device, an electronic device and a storage medium, and relates to the field of internet technologies. The method comprises: receiving a check-in instruction for a check-in task, and acquiring image data scanned by an AR device together with current real-time positioning information; and confirming that the check-in for the check-in task is successful if a picture matching a target picture exists in the image data and the error between the real-time positioning information and target position information is within a preset error range. The target picture is a verification picture preset for the check-in task, and the target position information is position information preset for the check-in task. By adding AR verification on top of check-in based on geographical positioning information, check-in errors caused by drift in the positioning information are eliminated, the user can complete the check-in only after reaching the exact location, and the accuracy of the check-in result is improved.

Description

Sign-in method, device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of internet technologies, and in particular, to a sign-in method, a sign-in apparatus, an electronic device, and a storage medium.
Background
In the O2O (Online To Offline) field, how to connect online services with offline ones by technical means is key to digitizing brick-and-mortar industries and improving their efficiency with mobile technology. A common technique is for an App (application program) to provide a check-in function that directs the user to visit an offline physical store and complete a check-in task in the App. Because the App can verify the user's current geographical positioning information, cheating can be prevented and the user can complete the activity only by actually going to the physical store. Check-in activities of this kind channel online App traffic to offline physical stores, reduce the customer-acquisition cost of traditional methods (handing out leaflets, newspaper listings, television advertising), and significantly increase storefront exposure and turnover.
In the related art, an important part of check-in verification is checking whether the current geographical positioning information of the user's App (e.g., longitude and latitude) matches the expected location. However, because the positioning information drifts and contains errors, the check-in can in many cases be completed even though the user never enters the physical store. Especially when several merchants are adjacent and hard to distinguish, the accuracy of the check-in result is low and the intended online-to-offline connection is difficult to achieve.
Disclosure of Invention
The embodiments of the invention provide a check-in method, a check-in device, an electronic device and a storage medium, aiming to solve the problems that existing check-in results have low accuracy and that the online-to-offline connection falls short of expectations.
To solve this technical problem, the invention is implemented as follows:
in a first aspect, an embodiment of the present invention provides a check-in method, including:
receiving a sign-in instruction aiming at the sign-in task, and acquiring image data obtained by scanning of AR equipment and current real-time positioning information;
if a picture matched with a target picture exists in the image data and the error between the real-time positioning information and the target position information is within a preset error range, confirming that the sign-in for the sign-in task is successful;
the target picture is a verification picture preset for the check-in task, and the target position information is position information preset for the check-in task.
Optionally, the step of receiving a check-in instruction for an arbitrary check-in task and acquiring image data obtained by scanning through an AR device includes:
receiving a sign-in instruction aiming at the sign-in task, acquiring sign-in characteristic information corresponding to the sign-in task, and displaying the sign-in characteristic information to a user to prompt the user to control a scanning range of the AR equipment;
acquiring image data obtained by scanning the AR equipment;
and the sign-in characteristic information is determined according to the target picture.
Optionally, the check-in feature information does not completely include the target picture.
Optionally, the target picture includes a first preset pattern for a real world and a second preset pattern for a virtual object, and before the step of confirming that the check-in for the check-in task is successful, the method further includes:
for any picture in the image data, extracting a first pattern region for the real world and a second pattern region for a virtual object that are included in the picture;
and if the first pattern area is matched with the first preset pattern and the second pattern area is matched with the second preset pattern, confirming that a picture matched with a target picture exists in the image data.
Optionally, before the step of confirming that the check-in for the check-in task is successful, the method further includes:
for any picture in the image data, acquiring a third pattern area aiming at the real world contained in the picture and a virtual object added by the AR equipment when the picture is obtained through scanning;
if the third pattern area is matched with the target picture and the added virtual object of the AR equipment is consistent with the preset virtual object, confirming that the picture matched with the target picture exists in the image data;
and the preset virtual object is a virtual object preset aiming at the check-in task.
In a second aspect, an embodiment of the present invention further provides a check-in apparatus, including:
the data acquisition module is used for receiving a sign-in instruction aiming at the sign-in task, and acquiring image data obtained by scanning of AR equipment and current real-time positioning information;
the check-in confirmation module is used for confirming that the check-in for the check-in task is successful if a picture matched with a target picture exists in the image data and the error between the real-time positioning information and the target position information is within a preset error range;
the target picture is a verification picture preset for the check-in task, and the target position information is position information preset for the check-in task.
Optionally, the data obtaining module includes:
the sign-in characteristic prompting submodule is used for receiving a sign-in instruction aiming at the sign-in task, acquiring sign-in characteristic information corresponding to the sign-in task, and displaying the sign-in characteristic information to a user so as to prompt the user to control the scanning range of the AR equipment;
the image data acquisition sub-module is used for acquiring image data obtained by scanning the AR equipment;
and the sign-in characteristic information is determined according to the target picture.
Optionally, the check-in feature information does not completely include the target picture.
Optionally, the apparatus further comprises:
the first image data processing module is used for extracting, for any picture in the image data, a first pattern region for the real world and a second pattern region for a virtual object that are included in the picture;
and the first picture verification confirming module is used for confirming that a picture matched with a target picture exists in the image data if the first pattern area is matched with the first preset pattern and the second pattern area is matched with the second preset pattern.
Optionally, the apparatus further comprises:
the second image data processing module is used for acquiring a third pattern area aiming at the real world contained in any picture in the image data and a virtual object added by the AR equipment when the picture is obtained through scanning;
the second picture verification confirming module is used for confirming that a picture matched with the target picture exists in the image data if the third pattern area is matched with the target picture and the added virtual object of the AR equipment is consistent with a preset virtual object;
and the preset virtual object is a virtual object preset aiming at the check-in task.
In a third aspect, an embodiment of the present invention additionally provides an electronic device, including: memory, a processor and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of any of the check-in methods according to the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when executed by a processor, the computer program implements the steps of any one of the check-in methods according to the first aspect.
In the embodiment of the invention, AR technology is used on top of check-in based on geographical positioning information, which eliminates check-in errors caused by drift in the positioning information, so that the user can complete the check-in only after reaching the exact location, and the accuracy of the check-in result is improved.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained based on these drawings without inventive labor.
FIG. 1 is a flow chart of steps of a check-in method in an embodiment of the present invention;
FIG. 2 is a flow chart of steps in another check-in method in an embodiment of the present invention;
FIG. 3 is a schematic diagram of a check-in device according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of another check-in device according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a hardware structure of an electronic device in the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a flowchart illustrating steps of a check-in method according to an embodiment of the present invention is shown.
Step 110: receiving a check-in instruction for a check-in task, and acquiring image data scanned by an AR device and current real-time positioning information.
Step 120: if a picture matching a target picture exists in the image data and the error between the real-time positioning information and target position information is within a preset error range, confirming that the check-in for the check-in task is successful; the target picture is a verification picture preset for the check-in task, and the target position information is position information preset for the check-in task.
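To make the two verification conditions of step 120 concrete, the following is a minimal Swift sketch of the decision; the type and function names are illustrative assumptions, and picture matching is left as an abstract closure rather than a specific algorithm.

```swift
import UIKit
import CoreLocation

/// Check-in conditions preset for one task (all names here are illustrative).
struct CheckInConditions {
    let targetLocation: CLLocation               // preset target position information
    let allowedErrorMeters: CLLocationDistance   // preset error range
    let matchesTargetPicture: (UIImage) -> Bool  // true when a scanned frame matches the verification picture
}

/// Step 120: the check-in succeeds only when BOTH conditions hold.
func checkInSucceeds(scannedFrames: [UIImage],
                     currentLocation: CLLocation,
                     conditions: CheckInConditions) -> Bool {
    let pictureMatched = scannedFrames.contains(where: conditions.matchesTargetPicture)
    let positionError = currentLocation.distance(from: conditions.targetLocation) // meters
    return pictureMatched && positionError <= conditions.allowedErrorMeters
}
```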
As described above, in fields such as O2O, a common technique is to issue a check-in task that directs the user to go to a physical store or similar location to complete it. Task check-in is a common feature of applications (hereinafter, Apps) in O2O scenarios: the App's check-in function is the carrier, and a task tied to a physical scene is the trigger condition. For example, an online check-in task may require the user to go to a particular merchant's storefront and check in with the App; the App restricts the check-in by verifying the geographical positioning information of the storefront, so the user can complete the specified task only by operating the App at the real storefront, realizing the combination of offline and online. Because the user's current geographical positioning information can be verified through the mobile phone, computer or other electronic device the user carries, cheating can be prevented and the user can complete the task check-in only by going to the real physical store. Check-in activities of this kind channel online users to offline physical stores, reduce the customer-acquisition cost of traditional methods (handing out leaflets, newspaper listings, television advertising), and increase store exposure and turnover.
However, this kind of task check-in relies mainly on verifying that the user's current geographical positioning information (e.g., longitude and latitude) matches the expected location. Because the positioning information may drift and contain errors, the check-in can in many cases be completed even though the user never enters the physical store. Especially when several merchants are adjacent and hard to distinguish, the accuracy of the check-in result is poor and the intended online-to-offline connection is difficult to achieve.
In addition, AR (Augmented Reality) is a technology that fuses virtual information with the real world. It combines techniques such as multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction and sensing: computer-generated virtual information such as text, images, three-dimensional models, music and video is simulated and then overlaid on the real world, so that the two kinds of information complement each other and the real world is "augmented". The major mobile development platforms (e.g., Apple and Google) have each launched their own AR development platforms, so developers can use AR technology at low cost. In particular, pictures in the real world can be recognized under an AR camera, which allows an iOS developer to easily recognize a real-world picture with AR technology when developing an iOS application; the Android development platform likewise supports recognizing real-world pictures under AR.
Therefore, in the embodiment of the invention, in order to improve the accuracy of the task check-in result without adding excessive cost, the task check-in can be performed by combining real-time positioning information with AR technology. Specifically, when a check-in instruction for a check-in task is received, the image data scanned by the AR device and the current real-time positioning information can be acquired.
Furthermore, in the embodiment of the invention, the real-time positioning information may be obtained in any available manner, and the embodiment of the invention is not limited in this respect. For example, if the user checks in through an electronic device such as a mobile phone, the real-time location information of that electronic device may be used as the real-time positioning information, and so on. The AR device may be any device with AR capability, such as an AR camera or AR glasses.
In addition, in the embodiment of the invention, to make task check-in convenient, the electronic device used for check-in may be a device that has both a positioning function and an AR function: the image data is obtained by scanning with the AR device attached to the electronic device, and the current real-time positioning information is obtained through the positioning function of the electronic device. The embodiment of the invention is not limited in this respect.
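As a sketch of how the current real-time positioning information could be obtained on such a device, the snippet below uses Core Location; the class name and callback shape are assumptions, and a production App would also handle authorization denial.

```swift
import CoreLocation

/// Obtains the device's current real-time positioning information (simplified sketch).
final class RealTimeLocator: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    private var onFix: ((CLLocation) -> Void)?

    func requestCurrentLocation(_ completion: @escaping (CLLocation) -> Void) {
        onFix = completion
        manager.delegate = self
        manager.desiredAccuracy = kCLLocationAccuracyBest
        manager.requestWhenInUseAuthorization()
        manager.requestLocation()   // one-shot location request
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        if let latest = locations.last { onFix?(latest) }   // current real-time positioning information
    }

    func locationManager(_ manager: CLLocationManager, didFailWithError error: Error) {
        // positioning failed; the check-in flow could prompt the user to retry
    }
}
```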
For example, suppose a check-in task is issued and completed through a certain App on an electronic device with AR capability. When the user finds the scene corresponding to the check-in inside the merchant's store, the user can open the App and tap its check-in button, whereupon the App starts the AR function.
The AR function in the embodiment of the invention can be developed in any available way, for example with the AR development technology provided by a general-purpose mobile development platform; it is a generic, basic technology. On the iOS development platform, the corresponding AR function can easily be developed with Apple's ARKit; similarly, the corresponding AR function can be developed with the AR development technology provided by Google's Android platform.
Further, the AR device can collect and analyze the image data it is able to scan, and compare it with the target picture used for AR verification to determine whether the image data acquired from the real scene corresponds to the target picture. If a picture matching the target picture is detected in the image data and the error between the real-time positioning information and the target position information is within the preset error range, the check-in for the check-in task is confirmed to be successful; the target picture is a verification picture preset for the check-in task, and the target position information is position information preset for the check-in task. If the image data scanned by the AR device contains a picture corresponding to the target picture, the AR device can stop scanning and proceed with the subsequent operations. If the current image data scanned by the AR device contains no picture corresponding to the target picture, the scanning range is adjusted as the user keeps moving the AR device, and the search continues until a picture matching the target picture is scanned or the user quits the check-in task.
In addition, since AR technology can identify real-world pictures contained in the image data scanned by the AR device, it can be used directly to detect whether a real-world picture matches the target picture. That is, the AR device itself can complete both the scanning of the image data and the detection of whether a picture matching the target picture exists in it, which improves verification efficiency without adding excessive cost.
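The following is a sketch of how this AR-side detection of the target picture could be implemented with ARKit image tracking on iOS; the class name, the reference-image identifier and the assumed physical width of the in-store picture are illustrative assumptions.

```swift
import ARKit
import SceneKit

/// Detects the target (verification) picture in the AR camera feed using ARKit image tracking.
final class TargetPictureScanner: NSObject, ARSCNViewDelegate {
    private(set) var targetPictureFound = false

    /// `merchantPicture` is assumed to be the verification photo downloaded for the check-in task.
    func startScanning(in sceneView: ARSCNView, merchantPicture: CGImage) {
        let reference = ARReferenceImage(merchantPicture,
                                         orientation: .up,
                                         physicalWidth: 0.3) // assumed printed width in meters
        reference.name = "targetPicture"

        let configuration = ARImageTrackingConfiguration()
        configuration.trackingImages = [reference]
        sceneView.delegate = self
        sceneView.session.run(configuration)
    }

    // Called when ARKit recognizes one of the tracked images in the real scene.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor,
              imageAnchor.referenceImage.name == "targetPicture" else { return }
        targetPictureFound = true // a picture matching the target picture exists in the scan
    }
}
```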
A picture matching the target picture can be understood as a picture whose similarity to the target picture reaches a preset similarity, and/or a picture that contains all of the content of the target picture, and so on. The specific matching conditions can be customized as required, and the embodiment of the invention is not limited in this respect. Likewise, the target picture and the target position information can be customized as required.
For example, if the check-in task targets a specific storefront, the target picture may be a specific picture of that storefront as seen by the AR device, and the target position information may be the geographical position information of that storefront.
The preset error range can also be customized as required; for example, it may be defined with the target position information as the origin, so that positioning information within 200 meters of it is considered acceptable.
In the embodiment of the invention, for a merchant check-in scenario, a background system can be set up for the App that issues check-in tasks, storing information about each cooperating merchant, including but not limited to the merchant's unique identifier (merchantId), the merchant's geographical positioning information (longitude, latitude), and an in-store photo used for AR verification (merchantPicture). The merchant's geographical positioning information can be understood as the target position information, and the in-store photo that can be verified by AR can be understood as the target picture. In general, each merchant's decoration style is clearly different, so the photos collected in stores for AR recognition are unique. The background system can also run image recognition on the collected photos to detect duplicates. The embodiment of the invention does not limit how the photos are collected: merchants may photograph their stores and upload the photos to the background system, or the App service provider may send professionals to take the photos, and so on.
When checking in, the user's App can request, from the corresponding background system and according to the check-in task, the information of the merchant associated with that task, including but not limited to the merchant's unique identifier, the merchant's geographical positioning information, and the in-store photo used for AR verification, for use in the subsequent verification stage and further interaction with the background system.
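A sketch of the merchant record such a background system could return per check-in task, using the field names mentioned above (merchantId, longitude, latitude, merchantPicture); the JSON layout and sample values are assumptions.

```swift
import Foundation

/// Record stored by the background system for each cooperating merchant.
struct MerchantInfo: Codable {
    let merchantId: String    // unique identification of the merchant
    let longitude: Double     // geographical positioning information
    let latitude: Double
    let merchantPicture: URL  // in-store photo used for AR verification (the target picture)
}

// Example of decoding the kind of payload a check-in request might return.
let examplePayload = Data("""
{"merchantId":"m-001","longitude":116.4074,"latitude":39.9042,
 "merchantPicture":"https://example.com/ar/m-001.jpg"}
""".utf8)
let merchant = try? JSONDecoder().decode(MerchantInfo.self, from: examplePayload)
```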
Moreover, in the embodiment of the invention, one or more target pictures and/or one or more pieces of target position information may be set for the same check-in task. In that case, if a picture matching at least one target picture is detected in the image data and the error between the real-time positioning information and at least one piece of target position information is within the preset error range, the check-in for the check-in task can be confirmed as successful.
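When several target pictures and/or several pieces of target position information are preset for one task, the check sketched earlier generalizes to an any-of match on each condition; names are again assumed.

```swift
import UIKit
import CoreLocation

/// Check-in succeeds if some scanned frame matches at least one target picture
/// AND the current position is within the error range of at least one target position.
func checkInSucceedsMultiTarget(scannedFrames: [UIImage],
                                currentLocation: CLLocation,
                                targetPictureMatchers: [(UIImage) -> Bool],
                                targetLocations: [CLLocation],
                                allowedErrorMeters: CLLocationDistance) -> Bool {
    let pictureMatched = scannedFrames.contains { frame in
        targetPictureMatchers.contains { matcher in matcher(frame) }
    }
    let nearSomeTarget = targetLocations.contains { target in
        currentLocation.distance(from: target) <= allowedErrorMeters
    }
    return pictureMatched && nearSomeTarget
}
```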
In the embodiment of the invention, by applying AR technology on top of the existing check-in based on geographical positioning information, check-in errors caused by positioning drift are eliminated and check-in accuracy is improved, so that the user can complete the check-in only by actually going to the merchant's store.
Optionally, in an embodiment of the present invention, when acquiring the image data, the step 110 may further include:
step 111, receiving a sign-in instruction for the sign-in task, acquiring sign-in characteristic information corresponding to the sign-in task, and displaying the sign-in characteristic information to a user to prompt the user to control a scanning range of the AR device; and the sign-in characteristic information is determined according to the target picture.
And step 112, acquiring image data obtained by scanning the AR equipment.
In practical applications, the area that the AR device can scan at any one time is limited, so the scanning process can be time-consuming and hurt check-in efficiency. Therefore, in the embodiment of the invention, to avoid excessive useless scanning and improve check-in efficiency, at least one piece of check-in feature information related to the target picture can be determined from the target picture and displayed to the user, prompting the check-in user to choose a reasonable scanning range for the AR device; image data scanned by the AR device is then acquired. What the check-in feature information specifically contains can be customized as required, and the embodiment of the invention is not limited in this respect.
For example, if the target picture for the current check-in task is a photo of the inside of the merchant's store, the check-in feature information may be set to describe conspicuous items of that store that appear in the target picture, such as a picture displayed inside the store, a distinctive table, and so on.
If the check-in feature information for the current check-in task is, say, a picture inside the merchant's store, then after arriving at the designated store the user can look for that feature (e.g., the picture inside the store) according to the prompt and aim the AR device so that its scanning range covers the feature; the image data scanned by the AR device is then acquired.
In addition, chain stores and franchise stores may have several locations under the same merchant name. To prevent the user from confusing store locations and failing to check in, after receiving the check-in instruction for the check-in task, the embodiment of the invention can also acquire the address information corresponding to the check-in task (e.g., the store address) and display it to the user, prompting the check-in user to go to the exact check-in address for the task.
Optionally, in this embodiment of the present invention, the check-in feature information does not completely include the target picture.
In practical applications, when verifying a real environment, the AR camera currently cannot tell whether what it scans is a pattern printed on a picture or an object actually present in the real environment, so verification of the real scene by the AR device leaves room for a user to cheat.
For example, if the user previously obtained the target picture of a check-in task and places it anywhere in the real environment, the image data scanned by the AR device will contain the target picture, and the check-in would succeed as long as the error between the real-time positioning information and the target position information is within the preset error range. This is precisely why the App still needs to verify the positioning information.
Therefore, in the embodiment of the invention, to avoid the possibility of cheating that comes from the user directly obtaining the target picture, the check-in feature information determined from the target picture does not contain the complete target picture. What the user can obtain in advance is therefore not the complete target picture, which effectively reduces the possibility of cheating while still providing a prompt, and further improves the accuracy of the check-in result.
Furthermore, the check-in feature information can be set to contain textual descriptions of items in the merchant's store rather than photos of those items, which further reduces cheating to some extent, since some users, once they have a photo, could pass verification by holding it up anywhere within the positioning error range. This is, however, only a possibility; judging from typical user behavior in real usage scenarios, only a small minority of users would take this approach.
If the real-time positioning information passes verification, that is, the error between the real-time positioning information and the target position information is within the preset error range, but no picture matching the target picture exists in the image data, the failed check-in can essentially be judged to be a case of user cheating, and the user can be prompted to go to the correct store to check in.
When both the picture verification and the positioning verification pass, the check-in succeeds and other tasks can be triggered, such as granting the reward associated with the check-in task in the App (e.g., gold coins, merchant coupons and the like).
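The outcome handling just described can be summarized in a small decision sketch; the enum and case names are illustrative assumptions.

```swift
/// Possible outcomes of combining the two verifications described above.
enum CheckInOutcome {
    case success             // both picture and position verified: grant the task reward
    case wrongStorePicture   // position verified but no matching picture: prompt the correct store
    case outOfRange          // position verification failed
}

func evaluateCheckIn(pictureMatched: Bool, positionWithinRange: Bool) -> CheckInOutcome {
    switch (positionWithinRange, pictureMatched) {
    case (true, true):  return .success
    case (true, false): return .wrongStorePicture
    case (false, _):    return .outOfRange
    }
}
```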
Referring to fig. 2, in the embodiment of the present invention, the target picture includes a first preset pattern for a real world and a second preset pattern for a virtual object, and it may be determined whether a picture matching the target picture exists in the image data by:
a step S1 of extracting, for any picture in the image data, a first pattern region for the real world and a second pattern region for a virtual object included in the picture;
step S2, if the first pattern area matches the first preset pattern and the second pattern area matches the second preset pattern, it is determined that a picture matching the target picture exists in the image data.
In practical applications, to further improve the accuracy of the picture verification result and thus of the check-in result, the complexity of the picture verification can be increased: when the target picture is set, it can be built, based on AR technology's combination of virtual information and the real world, to contain a first preset pattern for the real world and a second preset pattern for a virtual object. The virtual object may be any kind of virtual object that AR technology can place, such as a virtual animation, a virtual icon or a virtual special effect.
Then, when the picture verification is performed, the verification can be performed respectively for the first preset pattern and the second preset pattern. Specifically, for any picture in the image data, a first pattern region for the real world and a second pattern region for the virtual object, which are included in the picture, are extracted, and if the first pattern region matches with the first preset pattern and the second pattern region matches with the second preset pattern, it can be confirmed that a picture matching with a target picture exists in the image data.
Furthermore, in the embodiment of the invention, the first pattern region for the real world and the second pattern region for the virtual object contained in a picture may be extracted in any available manner, and the embodiment of the invention is not limited in this respect. For example, the first pattern region for the real world and the second pattern region for the virtual object contained in each picture may be extracted directly while the AR device is scanning, and so on.
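A sketch of this two-pattern check follows; region extraction and pattern matching are deliberately left as abstract closures (all names are assumptions), since the text allows any available extraction manner.

```swift
import UIKit

/// Verification when the target picture contains both a first preset pattern (real world)
/// and a second preset pattern (virtual object).
struct TwoPatternVerifier {
    let extractRealWorldRegion: (UIImage) -> UIImage?      // "first pattern region"
    let extractVirtualObjectRegion: (UIImage) -> UIImage?  // "second pattern region"
    let matchesFirstPresetPattern: (UIImage) -> Bool
    let matchesSecondPresetPattern: (UIImage) -> Bool

    /// Steps S1 and S2 above: true if any scanned picture satisfies both pattern conditions.
    func containsMatchingPicture(in scannedPictures: [UIImage]) -> Bool {
        scannedPictures.contains { picture in
            guard let realRegion = extractRealWorldRegion(picture),
                  let virtualRegion = extractVirtualObjectRegion(picture) else { return false }
            return matchesFirstPresetPattern(realRegion) && matchesSecondPresetPattern(virtualRegion)
        }
    }
}
```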
Referring to fig. 2, in the embodiment of the present invention, whether a picture matching a target picture exists in image data may be determined by:
step S3, for any picture in the image data, acquiring a third pattern area for the real world included in the picture, and a virtual object added by the AR device when the picture is obtained by scanning;
step S4, if the third pattern area matches the target picture and the virtual object added by the AR device is consistent with a preset virtual object, determining that a picture matching the target picture exists in the image data; the preset virtual object is a virtual object preset for the check-in task.
In addition, to further increase the complexity of the picture verification and thus the accuracy of the check-in result, at least one preset virtual object may additionally be set as a check-in verification condition when the other conditions (the target picture, the target position information, etc.) are set; in this case the target picture may contain only a pattern for the real world.
Then, at this time, when performing picture verification, for any picture in the image data, a third pattern region for the real world included in the picture and a virtual object added by the AR device when the picture is obtained by scanning may be obtained; if the third pattern area is matched with the target picture and the added virtual object of the AR equipment is consistent with the preset virtual object, determining that a picture matched with the target picture exists in corresponding image data; and the preset virtual object is a virtual object preset aiming at the check-in task.
Of course, in the embodiment of the present application, if the target picture is set to contain both the pattern for the real world and the pattern for the virtual object, the third pattern region for the real world does not need to be extracted: if the picture directly matches the target picture and the virtual object added by the AR device is consistent with the preset virtual object, it may be confirmed that a picture matching the target picture exists in the image data.
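A sketch of this second variant, where the real-world region is matched against the target picture and the virtual object the AR session added during scanning is checked against the preset virtual object; the node-name comparison and all identifiers are assumptions.

```swift
import UIKit
import SceneKit

/// Verification combining a real-world picture match with the virtual object the AR device added.
struct RealWorldPlusVirtualObjectVerifier {
    let matchesTargetPicture: (UIImage) -> Bool  // compares the "third pattern area" with the target picture
    let presetVirtualObjectName: String          // virtual object preset for the check-in task

    /// `addedVirtualObjects[i]` holds the SceneKit nodes the AR session attached while picture i was scanned.
    func containsMatchingPicture(pictures: [UIImage], addedVirtualObjects: [[SCNNode]]) -> Bool {
        zip(pictures, addedVirtualObjects).contains { picture, nodes in
            let virtualObjectConsistent = nodes.contains { $0.name == presetVirtualObjectName }
            return matchesTargetPicture(picture) && virtualObjectConsistent
        }
    }
}
```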
In the embodiment of the invention, whether a picture matching the target picture exists in the image data may also be determined by either of the two manners above or by a combination of the two, and the embodiment of the invention is not limited in this respect.
In the embodiment of the invention, combining AR with location-based check-in avoids the inaccurate check-in results, and the weakened online-to-offline connection, caused by positioning deviation. In addition, having the user look for specific items in the check-in location (such as a physical store) during check-in strengthens the user's sense of participation and awareness of the merchant's store, which can further improve the online-to-offline connection.
Referring to fig. 3, a schematic structural diagram of a check-in device in an embodiment of the present invention is shown.
The check-in device of the embodiment of the invention comprises: a data acquisition module 210 and a check-in confirmation module 220.
The functions of the modules and the interaction relationship between the modules are described in detail below.
The data acquisition module 210 is configured to receive a sign-in instruction for the sign-in task, and acquire image data obtained by scanning through the AR device and current real-time positioning information;
the check-in confirming module 220 is configured to confirm that the check-in for the check-in task is successful if a picture matched with a target picture exists in the image data and an error between the real-time positioning information and the target position information is within a preset error range; the target picture is a verification picture preset for the check-in task, and the target position information is position information preset for the check-in task.
Referring to fig. 4, in an embodiment of the present invention, the data obtaining module 210 further includes:
the sign-in feature prompting submodule 211 is configured to receive a sign-in instruction for the sign-in task, acquire sign-in feature information corresponding to the sign-in task, and display the sign-in feature information to a user to prompt the user to control a scanning range of the AR device; and the sign-in characteristic information is determined according to the target picture.
And an image data obtaining sub-module 212, configured to obtain image data obtained by scanning through the AR device.
Optionally, in this embodiment of the present invention, the check-in feature information does not completely include the target picture.
Referring to fig. 4, in an embodiment of the present invention, the apparatus may further include:
a first image data processing module 230, configured to extract, for any picture in the image data, a first pattern region for the real world and a second pattern region for the virtual object, which are included in the picture;
a first picture verification confirming module 240, configured to confirm that a picture matching a target picture exists in the image data if the first pattern region matches the first preset pattern and the second pattern region matches the second preset pattern.
Referring to fig. 4, in an embodiment of the present invention, the apparatus may further include:
a second image data processing module 250, configured to, for any picture in the image data, acquire a third pattern region for the real world included in the picture, and a virtual object added by the AR device when the picture is obtained by scanning;
a second picture verification confirming module 260, configured to confirm that a picture matching the target picture exists in the image data if the third pattern region matches the target picture and the virtual object added by the AR device is consistent with a preset virtual object; the preset virtual object is a virtual object preset for the check-in task.
The check-in device provided by the embodiment of the invention can realize each process realized in the method embodiments of fig. 1 to fig. 2, and is not described herein again in order to avoid repetition.
Fig. 5 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present invention.
The electronic device 300 includes, but is not limited to: radio frequency unit 301, network module 302, audio output unit 303, input unit 304, sensor 305, display unit 306, user input unit 307, interface unit 308, memory 309, processor 310, and power supply 311. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 5 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 301 may be used for receiving and sending signals during a message sending and receiving process or a call process; specifically, it receives downlink data from a base station and forwards it to the processor 310 for processing, and sends uplink data to the base station. In general, the radio frequency unit 301 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 301 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 302, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 303 may convert audio data received by the radio frequency unit 301 or the network module 302 or stored in the memory 309 into an audio signal and output as sound. Also, the audio output unit 303 may also provide audio output related to a specific function performed by the electronic apparatus 300 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 303 includes a speaker, a buzzer, a receiver, and the like.
The input unit 304 is used to receive audio or video signals. The input unit 304 may include a graphics processing unit (GPU) 3041 and a microphone 3042; the graphics processor 3041 processes image data of still pictures or video obtained by an image capturing apparatus (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 306. The image frames processed by the graphics processor 3041 may be stored in the memory 309 (or other storage medium) or transmitted via the radio frequency unit 301 or the network module 302. The microphone 3042 may receive sounds and process them into audio data. In the phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 301 and output.
The electronic device 300 also includes at least one sensor 305, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 3061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 3061 and/or the backlight when the electronic device 300 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 305 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 306 is used to display information input by the user or information provided to the user. The Display unit 306 may include a Display panel 3061, and the Display panel 3061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 307 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 307 includes a touch panel 3071 and other input devices 3072. The touch panel 3071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 3071 (e.g., operations by a user on or near the touch panel 3071 using a finger, a stylus, or any suitable object or attachment). The touch panel 3071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 310, and receives and executes commands sent by the processor 310. In addition, the touch panel 3071 may be implemented using various types, such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 307 may include other input devices 3072 in addition to the touch panel 3071. Specifically, the other input devices 3072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described herein.
Further, the touch panel 3071 may be overlaid on the display panel 3061, and when the touch panel 3071 detects a touch operation on or near the touch panel, the touch operation is transmitted to the processor 310 to determine the type of the touch event, and then the processor 310 provides a corresponding visual output on the display panel 3061 according to the type of the touch event. Although in fig. 5, the touch panel 3071 and the display panel 3061 are implemented as two separate components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 3071 and the display panel 3061 may be integrated to implement the input and output functions of the electronic device, which is not limited herein.
The interface unit 308 is an interface for connecting an external device to the electronic apparatus 300. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 308 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 300 or may be used to transmit data between the electronic apparatus 300 and the external device.
The memory 309 may be used to store software programs as well as various data. The memory 309 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 309 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 310 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, performs various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 309 and calling data stored in the memory 309, thereby performing overall monitoring of the electronic device. Processor 310 may include one or more processing units; preferably, the processor 310 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 310.
The electronic device 300 may further include a power supply 311 (such as a battery) for supplying power to various components, and preferably, the power supply 311 may be logically connected to the processor 310 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the electronic device 300 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides an electronic device, including: the processor 310, the memory 309, and the computer program stored in the memory 309 and capable of running on the processor 310, when being executed by the processor 310, implement the processes of the check-in method embodiments described above, and can achieve the same technical effects, and in order to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when being executed by a processor, the computer program implements each process of the above sign-in method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (12)

1. A check-in method, comprising:
receiving a check-in instruction for a check-in task, and acquiring image data obtained by scanning with an AR device, together with current real-time positioning information;
if a picture matching a target picture exists in the image data and the error between the real-time positioning information and the target position information is within a preset error range, confirming that the check-in for the check-in task is successful;
wherein the target picture is a verification picture preset for the check-in task, and the target position information is position information preset for the check-in task.
2. The method of claim 1, wherein the step of receiving a check-in instruction for a check-in task and acquiring image data obtained by scanning with an AR device comprises:
receiving the check-in instruction for the check-in task, acquiring check-in feature information corresponding to the check-in task, and displaying the check-in feature information to a user to prompt the user to control a scanning range of the AR device;
acquiring the image data obtained by scanning with the AR device;
wherein the check-in feature information is determined according to the target picture.
3. The method of claim 2, wherein the check-in feature information does not completely contain the target picture.
4. The method according to any one of claims 1-3, wherein the target picture comprises a first preset pattern for the real world and a second preset pattern for a virtual object, and the method further comprises, before the step of confirming that the check-in for the check-in task is successful:
for any picture in the image data, extracting, from the picture, a first pattern region for the real world and a second pattern region for a virtual object contained in the picture;
and if the first pattern region matches the first preset pattern and the second pattern region matches the second preset pattern, confirming that a picture matching the target picture exists in the image data.
5. The method according to any one of claims 1-3, further comprising, before the step of confirming that the check-in for the check-in task is successful:
for any picture in the image data, acquiring a third pattern region for the real world contained in the picture and the virtual object added by the AR device when the picture was obtained through scanning;
if the third pattern region matches the target picture and the virtual object added by the AR device is consistent with a preset virtual object, confirming that a picture matching the target picture exists in the image data;
wherein the preset virtual object is a virtual object preset for the check-in task.
6. A check-in apparatus, comprising:
a data acquisition module, configured to receive a check-in instruction for a check-in task, and acquire image data obtained by scanning with an AR device, together with current real-time positioning information;
a check-in confirmation module, configured to confirm that the check-in for the check-in task is successful if a picture matching a target picture exists in the image data and the error between the real-time positioning information and the target position information is within a preset error range;
wherein the target picture is a verification picture preset for the check-in task, and the target position information is position information preset for the check-in task.
7. The apparatus of claim 6, wherein the data acquisition module comprises:
a check-in feature prompting submodule, configured to receive the check-in instruction for the check-in task, acquire check-in feature information corresponding to the check-in task, and display the check-in feature information to a user to prompt the user to control the scanning range of the AR device;
and an image data acquisition submodule, configured to acquire the image data obtained by scanning with the AR device;
wherein the check-in feature information is determined according to the target picture.
8. The apparatus of claim 7, wherein the check-in feature information does not completely contain the target picture.
9. The apparatus according to any one of claims 6-8, further comprising:
a first image data processing module, configured to extract, from any picture in the image data, a first pattern region for the real world and a second pattern region for a virtual object contained in the picture;
and a first picture verification confirmation module, configured to confirm that a picture matching the target picture exists in the image data if the first pattern region matches the first preset pattern and the second pattern region matches the second preset pattern.
10. The apparatus according to any one of claims 6-8, further comprising:
a second image data processing module, configured to acquire, for any picture in the image data, a third pattern region for the real world contained in the picture and the virtual object added by the AR device when the picture was obtained through scanning;
and a second picture verification confirmation module, configured to confirm that a picture matching the target picture exists in the image data if the third pattern region matches the target picture and the virtual object added by the AR device is consistent with a preset virtual object;
wherein the preset virtual object is a virtual object preset for the check-in task.
11. An electronic device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the check-in method according to any one of claims 1 to 5.
12. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the steps of the check-in method according to any one of claims 1 to 5.
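
The two conditions in claim 1 (a scanned picture that matches the preset verification picture, and a positioning error within a preset range) can be made concrete with a short sketch. The Python below is illustrative only, not the patented implementation: the positioning error is approximated with the haversine distance, the picture comparison uses ORB feature matching from OpenCV, and every threshold (the 0.75 ratio test, the 30-match minimum, the 50 m default error range) is an assumption chosen for the example.

```python
# Illustrative sketch of the claim-1 check-in decision (not the patented
# implementation): check-in succeeds only when (1) some scanned picture
# matches the preset verification picture and (2) the real-time positioning
# error is within a preset range. All thresholds are assumptions.
import math
from typing import Iterable, Tuple

import cv2
import numpy as np

EARTH_RADIUS_M = 6_371_000.0


def positioning_error_m(real_time: Tuple[float, float],
                        target: Tuple[float, float]) -> float:
    """Great-circle distance in metres between two (lat, lon) pairs (haversine)."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*real_time, *target))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(h))


def picture_matches_target(picture: np.ndarray, target: np.ndarray,
                           min_good_matches: int = 30) -> bool:
    """Rough picture/verification-picture comparison using ORB features and
    Lowe's ratio test; one of many possible ways to implement the match."""
    orb = cv2.ORB_create()
    _, des1 = orb.detectAndCompute(cv2.cvtColor(picture, cv2.COLOR_BGR2GRAY), None)
    _, des2 = orb.detectAndCompute(cv2.cvtColor(target, cv2.COLOR_BGR2GRAY), None)
    if des1 is None or des2 is None:
        return False
    matches = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(des1, des2, k=2)
    good = [p for p in matches if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    return len(good) >= min_good_matches


def check_in(pictures: Iterable[np.ndarray], target_picture: np.ndarray,
             real_time_loc: Tuple[float, float], target_loc: Tuple[float, float],
             max_error_m: float = 50.0) -> bool:
    """Both conditions of claim 1 must hold for the check-in to succeed."""
    if positioning_error_m(real_time_loc, target_loc) > max_error_m:
        return False
    return any(picture_matches_target(p, target_picture) for p in pictures)
```

In a real deployment the error range and the match threshold would be configured per check-in task rather than hard-coded.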
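
Claims 4 and 5 add a second verification signal on top of the position check: the picture must also be consistent with the virtual object handled by the AR device. The sketch below models only the claim-5 variant and assumes, purely for illustration, that the AR device can report an identifier for the virtual object it rendered into each scanned picture; ScannedPicture, rendered_object_id, and the example identifiers are hypothetical names, and the real-world comparison is injected as a callable (for instance the ORB matcher above).

```python
# Illustrative sketch of the claim-5 picture verification: a picture counts as
# a match only if its real-world region matches the target picture AND the
# virtual object the AR device added at scan time equals the preset object.
# All names are hypothetical.
from dataclasses import dataclass
from typing import Callable, Iterable, Optional


@dataclass
class ScannedPicture:
    image: object                       # pixels of the scanned frame
    rendered_object_id: Optional[str]   # virtual object added by the AR device


def claim5_picture_match(pictures: Iterable[ScannedPicture],
                         real_world_matches: Callable[[object], bool],
                         preset_object_id: str) -> bool:
    """True if any scanned picture satisfies both claim-5 conditions."""
    return any(
        real_world_matches(p.image) and p.rendered_object_id == preset_object_id
        for p in pictures
    )


# Toy usage: only the second frame was scanned with the preset virtual object.
frames = [
    ScannedPicture(image="frame-1", rendered_object_id=None),
    ScannedPicture(image="frame-2", rendered_object_id="store-front-badge"),
]
print(claim5_picture_match(frames,
                           real_world_matches=lambda img: img == "frame-2",
                           preset_object_id="store-front-badge"))  # -> True
```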
CN202010414646.XA 2020-05-15 2020-05-15 Sign-in method, sign-in device, electronic equipment and storage medium Active CN111723843B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010414646.XA CN111723843B (en) 2020-05-15 2020-05-15 Sign-in method, sign-in device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010414646.XA CN111723843B (en) 2020-05-15 2020-05-15 Sign-in method, sign-in device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111723843A true CN111723843A (en) 2020-09-29
CN111723843B CN111723843B (en) 2023-09-15

Family

ID=72564543

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010414646.XA Active CN111723843B (en) 2020-05-15 2020-05-15 Sign-in method, sign-in device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111723843B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106920079A (en) * 2016-12-13 2017-07-04 阿里巴巴集团控股有限公司 Virtual objects distribution method and device based on augmented reality
CN108197993A (en) * 2017-12-29 2018-06-22 广州特逆特科技有限公司 A kind of method and server for being drained for businessman, excavating potential customers
CN108573406A (en) * 2018-04-10 2018-09-25 四川金亿信财务咨询有限公司 Advertisement marketing system and method on a kind of line based on verification of registering
US10157504B1 (en) * 2018-06-05 2018-12-18 Capital One Services, Llc Visual display systems and method for manipulating images of a real scene using augmented reality
CN109785458A (en) * 2019-01-14 2019-05-21 来奖(深圳)科技有限公司 A kind of user participates in the monitoring management method and device of marketing activity

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KANETO, Y., KOMURO, T.: "Space-sharing AR Interaction on Multiple Mobile Devices with a Depth Camera", 2016 IEEE Virtual Reality Conference (VR), pages 197-198 *
ZHANG Haopeng et al.: "Tracking and registration method for augmented reality assembly systems based on image matching", Computer Integrated Manufacturing Systems *
LIN Xiaoming: "Research on virtual-real registration methods in augmented reality", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112767573A (en) * 2020-12-17 2021-05-07 宽衍(北京)科技发展有限公司 Fault card punching method, device, server and storage medium
CN113158085A (en) * 2021-03-31 2021-07-23 五八有限公司 Information switching processing method and device, electronic equipment and storage medium
CN113158085B (en) * 2021-03-31 2023-06-13 五八有限公司 Information switching processing method and device, electronic equipment and storage medium
CN114189550A (en) * 2021-11-30 2022-03-15 北京五八信息技术有限公司 Virtual positioning detection method and device, electronic equipment and storage medium
CN114189550B (en) * 2021-11-30 2024-04-26 北京五八信息技术有限公司 Virtual positioning detection method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111723843B (en) 2023-09-15

Similar Documents

Publication Publication Date Title
CN111417028B (en) Information processing method, information processing device, storage medium and electronic equipment
EP3716132A1 (en) Code scanning method, code scanning device and mobile terminal
CN107943489B (en) Data sharing method and mobile terminal
CN111723843B (en) Sign-in method, sign-in device, electronic equipment and storage medium
WO2019127441A1 (en) Method for selecting emulated card, and mobile device
CN111737547A (en) Merchant information acquisition system, method, device, equipment and storage medium
US9621703B2 (en) Motion to connect to kiosk
CN109495638B (en) Information display method and terminal
CN110457571B (en) Method, device and equipment for acquiring interest point information and storage medium
CN109951889B (en) Internet of things network distribution method and mobile terminal
CN108510267B (en) Account information acquisition method and mobile terminal
CN109885490B (en) Picture comparison method and device
CN108307039B (en) Application information display method and mobile terminal
CN110659895A (en) Payment method, payment device, electronic equipment and medium
CN108196663B (en) Face recognition method and mobile terminal
CN110929238B (en) Information processing method and device
CN110677537B (en) Note information display method, note information sending method and electronic equipment
CN108596600B (en) Information processing method and terminal
CN110969434A (en) Payment method, server, terminal and system
CN111256678A (en) Navigation method and electronic equipment
CN111815319A (en) Graphic code processing method and electronic equipment
CN113051485B (en) Group searching method, device, terminal and storage medium
CN111147750B (en) Object display method, electronic device, and medium
CN111597468B (en) Social content generation method, device, equipment and readable storage medium
CN109542293B (en) Menu interface setting method and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant