CN115668989B - Scene recognition method and electronic equipment - Google Patents

Scene recognition method and electronic equipment

Info

Publication number
CN115668989B
CN115668989B (application number CN202280004912.9A)
Authority
CN
China
Prior art keywords
electronic equipment
event
mobile phone
preset
electronic device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202280004912.9A
Other languages
Chinese (zh)
Other versions
CN115668989A (en
Inventor
丁勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202110444566.3A external-priority patent/CN113115211A/en
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Publication of CN115668989A publication Critical patent/CN115668989A/en
Application granted granted Critical
Publication of CN115668989B publication Critical patent/CN115668989B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02Services making use of location information

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Telephone Function (AREA)

Abstract

An embodiment of the application provides a scene recognition method and an electronic device. The method includes: detecting that the electronic device is located at a first position, and obtaining a position tag corresponding to the first position, where the position tag represents environment information acquired by the electronic device at the current place. The electronic device compares the position tag with position information preset for the first position to obtain a comparison result. If the comparison result shows that the position tag is consistent with the preset position information, the electronic device can be triggered, without receiving any user operation, to automatically execute a first action corresponding to the first position. The first action includes the electronic device running a first application and displaying a first interface, and the first interface may include a two-dimensional code. The method simplifies operation of the electronic device and thereby improves the user experience.

Description

Scene recognition method and electronic equipment
The present application claims priority to Chinese patent application No. 202110444566.3, filed on April 23, 2021, entitled "A Multi-layered Fence Construction Method, Cloud Server, and First Terminal Device", and to Chinese patent application No. 202110892061.3, filed on August 4, 2021, entitled "A Scene Recognition Method and Electronic Device", the entire contents of which are incorporated herein by reference.
Technical Field
The embodiment of the application relates to the technical field of electronic communication, in particular to a scene recognition method and electronic equipment.
Background
With the development of electronic technology, electronic products have become increasingly intelligent, and electronic devices are used in many aspects of daily life. Taking the mobile phone as an example: when people travel by public transportation such as the subway or bus, the mobile phone can be used to pay the fare; people can also register various electronic membership cards at stores and keep them on the mobile phone. A mobile phone with the near field communication (Near Field Communication, NFC) function can hold an electronic access card, making it convenient to enter certain areas. For example, an electronic access card for a residential community can be set up on the mobile phone so that people can conveniently pass through the community gate.
Consider paying subway fares with a mobile phone. When passengers enter a subway station to ride the subway, they can pass only through a subway gate, entering and exiting the station by presenting a riding code on the mobile phone. Taking presenting a riding code in a payment application as an example: the mobile phone runs the application, the user taps "Travel" in its interface and then taps "Subway", and the mobile phone displays the subway riding code. The user can then pass through the subway gate by presenting the subway riding code displayed on the mobile phone.
Taking presenting a riding code in another payment application as an example: the mobile phone runs the application, the user taps "Card Package" on its interface, then taps "Transport Card" in the corresponding interface displayed by the mobile phone, and then taps "Riding Code". After this series of operations, the mobile phone displays the subway riding code.
In the above examples, the mobile phone can display the interface the user needs (i.e., the subway riding code) only after multiple steps of operation. That is, this approach is relatively cumbersome to operate.
Disclosure of Invention
The application provides a scene recognition method and electronic equipment, so that the operation of the electronic equipment is simplified, and the user experience is improved.
In order to achieve the technical purpose, the application adopts the following technical scheme:
in a first aspect, the present application provides a scene recognition method, which may include: detecting that the electronic equipment enters a first area, detecting a position tag corresponding to the first position when the electronic equipment is located at the first position in the first area after the electronic equipment is detected to enter the first area, wherein the position tag represents environment information acquired by the electronic equipment in a current place, and the first position is located in the first area. And the electronic equipment compares the position label with the position information corresponding to the preset position to obtain a comparison result. It should be noted that the preset position information represents environmental information of the electronic device in a preset scene of the first position, and the preset position is in the first area. That is, before the electronic device obtains the position tag, the electronic device has already obtained the preset position information, and the electronic device compares the position tag of the first position with the position information corresponding to the preset position, so as to determine whether the current first position of the electronic device is the preset position.
If the comparison result indicates that the position tag is consistent with the position information corresponding to the preset position, the current first position of the electronic device is the preset position. Without receiving any user operation, the electronic device can be triggered to automatically execute a first action corresponding to the first position; the first action includes the electronic device running a first application and displaying a first interface, and the first interface may include a two-dimensional code. The first application can implement multiple functions; when the electronic device runs the first application, the application implements the corresponding function and then displays the corresponding display interface. For example, the first application may be an electronic payment application with functions such as electronic payment and electronic collection; when the electronic device runs it to perform electronic payment versus electronic collection, the display interfaces differ. That is, after the electronic device runs the first application, it may display interfaces for different functions, where the first interface corresponds to the display interface of the first application in response to the first action.
On the other hand, if the comparison result indicates that the position tag does not match the position information corresponding to the preset position, it indicates that the current first position of the electronic device is not the preset position, and the electronic device may not perform the first action, that is, the electronic device does not respond at all.
The first action and the first position have a preset logic relationship. That is, the position tag of the electronic device at the first position corresponds to the preset position information, and the electronic device may automatically perform the first action. The preset logic relationship may be that a strong association rule between the first location and the first action is preset in the electronic device, so that the electronic device may automatically execute the first action at the first location.
It can be appreciated that if the electronic device is at the first position, it acquires the environment information at the current place to generate the position tag. If the position tag is consistent with the preset position information, the current scene is consistent with the preset scene. Because the electronic device includes the preset logical relationship between the first position and the first action, and the current scene is consistent with the preset scene, the electronic device can execute the first action without receiving any user operation, thereby simplifying operation of the electronic device and enhancing the user experience.
The position tag includes at least one of: all first site addresses acquired by the electronic device and the signal strength of each first site address; first cell identifiers acquired by the electronic device and the signal strength of each first cell identifier; a first satellite signal acquired by the electronic device; and a first illumination intensity acquired by the electronic device. The preset position information includes at least one of: all second site addresses acquired by the electronic device and the signal strength of each second site address; second cell identifiers acquired by the electronic device and the signal strength of each second cell identifier; a second satellite signal acquired by the electronic device; and a second illumination intensity acquired by the electronic device.
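The measurements listed above can be grouped into a simple structure. The following is a minimal sketch in Python; the class and field names are assumptions made for illustration and do not appear in the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class LocationTag:
    """Environment information acquired by the device at its current place.

    Every field is optional, since the patent states the tag contains *at
    least one* of these measurements.
    """
    # Wi-Fi site (access-point) addresses mapped to signal strength (dBm)
    site_addresses: Dict[str, float] = field(default_factory=dict)
    # Cellular cell identifiers mapped to signal strength (dBm)
    cell_ids: Dict[str, float] = field(default_factory=dict)
    # Satellite (GNSS) signal measurement, if acquired
    satellite_signal: Optional[float] = None
    # Illumination intensity (lux), if acquired
    illumination: Optional[float] = None

# A tag that might be observed near a subway gate (all values made up)
tag = LocationTag(
    site_addresses={"aa:bb:cc:dd:ee:01": -48.0, "aa:bb:cc:dd:ee:02": -67.0},
    cell_ids={"460-00-1234-5678": -85.0},
    illumination=120.0,
)
print(len(tag.site_addresses))  # 2
```

The same structure can represent the preset position information, since the patent lists the same categories of measurements for both.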
In one possible design manner of the first aspect, the method may further include: after detecting that the electronic device enters the first area, before detecting a position tag corresponding to the first position when the electronic device is located at the first position in the first area, the operations that the electronic device may further execute include: after the electronic equipment is detected to enter the first area, determining that the electronic equipment is located at a preset position in the first area. The method comprises the steps of obtaining the confidence coefficient of the electronic equipment executing a first event, wherein the confidence coefficient represents the execution degree of triggering the execution of the first event by user operation, the first event has a preset logic relationship with a first position, the first event has a relationship with a first action, and the electronic equipment runs a first application when executing the first event.
Further, when the confidence level characterizes that the execution of the first event is finished, the electronic device may acquire an execution result of the first event and check the execution result to obtain a check result. If the verification result indicates that the first event is successfully executed, a position label corresponding to the preset position of the electronic equipment is obtained, and the position label is transmitted to the cloud server, so that the cloud server can generate position information corresponding to the preset position according to the position label. The location tag characterizes environmental information acquired by the electronic device in a current place.
It should be noted that, the first event is that, in response to a user operation, the electronic device runs the first application to implement the corresponding function. The first action is an action which is automatically executed by the electronic equipment, wherein the electronic equipment determines that the position label is consistent with preset position information. Wherein the first action is automatically performed by the electronic device and the first event is an action performed by the electronic device in response to a user operation. For example, the first action is that the electronic device realizes the function of displaying the riding code, the electronic device determines that the position label is consistent with the preset position information, the first application is automatically operated, and a first interface is displayed, namely the electronic device displays the interface comprising the riding code two-dimensional code. When the electronic equipment is in the first position, the electronic equipment receives user operation, and responds to the user operation of the electronic equipment to operate the first application, and a first interface is displayed. In addition, the confidence level of the first event is the execution degree of the first event, namely, the electronic equipment starts to run the first application, and displays a first interface, and the confidence level of the first event indicates that the first event starts to be executed; when the electronic equipment is close to the code scanning port of the subway gate under the operation of the user, so that the user passes through the subway gate, the confidence of the first event indicates that the execution of the first event is completed.
In another possible design manner of the first aspect, the method may further include: after detecting that the electronic device enters the first area, before detecting a position tag corresponding to the first position when the electronic device is located at the first position in the first area, the operations that the electronic device may further execute include: after the electronic equipment is detected to enter the first area, determining that the electronic equipment is located at a preset position in the first area. The method comprises the steps of obtaining the confidence coefficient of the electronic equipment executing a first event, wherein the confidence coefficient represents the execution degree of triggering the execution of the first event by user operation, the first event has a preset logic relationship with a first position, the first event has a relationship with a first action, and the electronic equipment runs a first application when executing the first event.
Further, when the confidence level characterizes that the execution of the first event is finished, the electronic device may acquire an execution result of the first event and check the execution result to obtain a check result. If the verification result indicates that the first event is successfully executed, a position label corresponding to the preset position of the electronic equipment is obtained, and the electronic equipment can generate position information corresponding to the preset position according to the position label. The location tag characterizes environmental information acquired by the electronic device in a current place.
It can be understood that the electronic device is acquired according to the first event, and the electronic device executes the environmental information of the current location of the first event under the operation of the user. In one aspect, the electronic device may transmit the obtained environmental information to the cloud server, so that the cloud server generates preset environmental information. On the other hand, the electronic device can generate preset environmental information according to the environmental information.
In another possible design manner of the first aspect, before comparing the position tag with the position information corresponding to the preset position and obtaining the comparison result, the electronic device may further obtain the position information corresponding to the preset position of the electronic device.
When the electronic device is at the preset position, the position information can be acquired first. In this way, the electronic device can compare whether the position label accords with the preset position information according to the currently acquired position label so as to determine whether the electronic device is positioned at the preset position in the first area, thereby determining whether to automatically execute the first action.
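The decision just described, i.e. obtain the current position tag, compare it with the preset position information, and auto-execute the first action only on a match, can be sketched as follows. The function names and the similarity threshold are illustrative assumptions, not values from the patent.

```python
def recognize_scene(current_tag, preset_info, match_fn, threshold=0.6):
    """Decide whether to auto-execute the first action at the first position.

    match_fn scores the similarity between the current position tag and the
    preset position information; the name and the 0.6 threshold are
    illustrative assumptions.
    """
    score = match_fn(current_tag, preset_info)
    if score >= threshold:
        # Tag is consistent with the preset information: trigger the first
        # action with no user operation (run the first application and
        # display the first interface, e.g. a riding-code two-dimensional code).
        return "execute_first_action"
    # Inconsistent: the device does not respond at all.
    return "no_action"

# Toy similarity: fraction of shared observations
def overlap(a, b):
    return len(a & b) / max(len(a | b), 1)

print(recognize_scene({"ap1", "ap2"}, {"ap1", "ap2", "ap3"}, overlap))
# execute_first_action
```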
In another possible design manner of the first aspect, when the confidence characterizes that the execution of the first event is finished, the method specifically includes: after the electronic device obtains the confidence coefficient, if the confidence coefficient is determined to be greater than a first preset threshold value and less than or equal to a second preset threshold value, the electronic device can be determined to start executing the first event; and if the confidence coefficient is larger than a second preset threshold value, determining that the electronic equipment executes the first event. Wherein the first preset threshold is less than the second preset threshold.
It can be appreciated that in the case where the electronic device executes the first event in response to the user operation, the electronic device may acquire the confidence level of the first event in real time, so that the electronic device determines the execution state of the first event according to the confidence level. Wherein the confidence may characterize the beginning of execution, completion of execution, failure of execution, etc. of the first event.
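The two-threshold rule above can be written down directly. The concrete threshold values below are illustrative assumptions; the patent requires only that the first preset threshold be smaller than the second.

```python
def event_state(confidence, first_threshold=0.3, second_threshold=0.8):
    """Map the confidence coefficient of the first event to its execution state.

    The patent requires first_threshold < second_threshold; the values
    0.3 and 0.8 are illustrative assumptions.
    """
    if confidence > second_threshold:
        return "completed"      # execution of the first event has finished
    if confidence > first_threshold:
        return "started"        # first_threshold < confidence <= second_threshold
    return "not_started"        # e.g. execution failed or never began

print(event_state(0.9))  # completed
print(event_state(0.5))  # started
print(event_state(0.1))  # not_started
```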
In another possible design manner of the first aspect, the execution result includes a current display interface of the electronic device.
It will be appreciated that the electronic device executing the first event may display a corresponding interface based on user operation. Therefore, the electronic device can judge the execution result of the first event by acquiring the current display interface. Alternatively, the electronic device may obtain the current display interface in real time, and the electronic device may determine the confidence level of the first event according to the display interface.
In another possible design manner of the first aspect, if the position tag includes: all first site addresses acquired by the electronic equipment, the signal intensity of each first site address, the first cell identification acquired by the electronic equipment and the signal intensity of the first cell identification.
The electronic device compares the position tag with position information preset in the first position to obtain a comparison result, which specifically includes: generating a first set of items including a first site and a first cell identity based on the location tag; generating a second item set of a second site and a second cell identifier based on preset position information; and according to the first item set and the second item set, calculating to obtain a comparison result of the position label and the preset position information.
The electronic device can calculate and obtain a comparison result of the position tag and the preset position information according to a preset operation relation.
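The patent leaves the preset operational relation unspecified; one plausible concrete choice is a Jaccard-style similarity over the combined item sets, sketched here under that assumption.

```python
def compare(tag_items, preset_items):
    """Compare the first item set (from the position tag) with the second
    item set (from the preset position information).

    Each item set here is a set of strings mixing site addresses and cell
    identifiers. The Jaccard index is an assumed concrete choice; the
    patent only states that a preset operational relation is applied.
    """
    union = tag_items | preset_items
    if not union:
        return 0.0
    return len(tag_items & preset_items) / len(union)

first_set = {"aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02", "cell:460-00-1234"}
second_set = {"aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:03", "cell:460-00-1234"}
print(round(compare(first_set, second_set), 2))  # 0.5
```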
In another possible design manner of the first aspect, if the first action relates to a scene in which the two-dimensional code displayed by the electronic device is scanned to pass through the subway gate and take the subway, the two-dimensional code is a subway riding code or a health-related two-dimensional code (for example, a code indicating whether a vaccine has been administered, or a code indicating a health test result). If the first action is taking a bus with the two-dimensional code displayed by the electronic device, the two-dimensional code may be a bus riding code.
In another possible design manner of the first aspect, before the obtaining the confidence coefficient of the electronic device executing the first event, the method may further include: the electronic device starts executing the first event in response to a user operation.
In another possible design manner of the first aspect, when the electronic device executes the first event in response to the user operation, the method specifically includes: the electronic device receives a first operation input by the user, the first operation being used to instruct the electronic device to run the first application. In response to the first operation, the electronic device runs the first application and displays a first interface, where the first interface includes a two-dimensional code, which is a subway riding code. The electronic device then receives a hold-and-flip operation from the user, in response to which the display screen of the electronic device faces the code-scanning port of the subway gate.
Here the first event is described using the example of scanning a riding code at a subway station.
In another possible design manner of the first aspect, the obtaining the confidence coefficient of the electronic device executing the first event specifically includes: in response to the first operation, after the electronic device runs the first application and displays the first interface, the electronic device acquires the first interface. Further, the electronic device identifies the first interface to obtain a first identification result, and can determine the confidence coefficient of the first event according to the first identification result. The electronic device may also continue to acquire its current display interface once every first preset time period, and determine the confidence coefficient of the first event according to the current display interface.
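The periodic interface check described above amounts to a polling loop. The capture and recognition hooks below are hypothetical stand-ins for the device's actual interface-recognition mechanism, and the interval and thresholds are illustrative.

```python
import time

def poll_confidence(get_interface, recognize, interval_s=0.01, max_polls=5):
    """Repeatedly capture the current display interface and derive the
    confidence coefficient of the first event from it.

    get_interface captures the current screen contents and recognize maps
    an interface to a confidence value; both are hypothetical hooks.
    """
    history = []
    for _ in range(max_polls):
        conf = recognize(get_interface())
        history.append(conf)
        if conf > 0.8:          # second preset threshold: event completed
            break
        time.sleep(interval_s)  # wait the first preset time period
    return history

# Simulated interfaces: riding code shown twice, then the gate is passed
screens = iter(["riding_code", "riding_code", "gate_passed"])
rec = lambda s: 0.9 if s == "gate_passed" else 0.5
print(poll_confidence(lambda: next(screens), rec))  # [0.5, 0.5, 0.9]
```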
In another possible design manner of the first aspect, the obtaining the confidence coefficient of the electronic device executing the first event specifically includes: after the electronic device receives the hold-and-flip operation from the user, the electronic device acquires the current display interface after a second preset time period. The electronic device identifies the current display interface to obtain a second identification result, and determines the confidence coefficient of the first event according to the second identification result.
In a second aspect, the present application further provides an electronic device, including: the device comprises a detection module, a comparison module and an execution module.
The detection module is configured to detect that the electronic device enters the first area and, after that, to detect a position tag corresponding to a first position when the electronic device is located at the first position in the first area, where the position tag represents environment information acquired by the electronic device at the current place, and the first position is within the first area. The comparison module is configured to compare the position tag with position information corresponding to a preset position to obtain a comparison result. It should be noted that the preset position information represents environment information of the electronic device in a preset scene at the first position, and the preset position is within the first area. The execution module is configured to trigger the electronic device to automatically execute a first action corresponding to the first position if the comparison result shows that the position tag is consistent with the preset position information and no user operation is received, where the first action includes the electronic device running a first application and displaying a first interface, and the first interface may include a two-dimensional code.
In one possible design manner of the second aspect, the electronic device may further include: a determination module and a response module.
The determining module is used for determining that the electronic equipment is located at a preset position in the first area after the electronic equipment is detected to enter the first area. The response module is used for acquiring the confidence coefficient of the electronic equipment executing the first event, wherein the confidence coefficient characterizes the execution degree of the first event triggered by the user operation, the first event and the first position have a preset logic relationship, the first event has a relationship with the first action, and the electronic equipment runs the first application when executing the first event.
The confidence level characterizes the execution degree of the first event triggered by the user operation, wherein the first event has a preset logic relationship with the first position, the first event has a relationship with the first action, and the first application is run when the electronic equipment executes the first event. When the electronic device determines to start executing the first event according to the confidence coefficient of the first event, the electronic device may acquire an execution result of the first event and check the execution result to obtain a check result. If the verification result indicates that the first event is successfully executed, a position label corresponding to the first position is obtained, and the position label is transmitted to the cloud server, so that the cloud server can generate preset position information according to the position label. The location tag characterizes environmental information acquired by the electronic device in a current place.
In another possible design manner of the second aspect, the electronic device may further include: a determination module and a response module.
The determining module is used for determining that the electronic equipment is located at a preset position in the first area after the electronic equipment is detected to enter the first area. The response module is used for acquiring the confidence coefficient of the electronic equipment executing the first event, wherein the confidence coefficient characterizes the execution degree of the first event triggered by the user operation, the first event and the first position have a preset logic relationship, the first event has a relationship with the first action, and the electronic equipment runs the first application when executing the first event. When the electronic device determines to start executing the first event according to the confidence coefficient of the first event, the electronic device may acquire an execution result of the first event and check the execution result to obtain a check result. If the verification result indicates that the first event is successfully executed, a position tag corresponding to the first position is obtained, and the electronic equipment can generate preset position information according to the position tag. The location tag characterizes environmental information acquired by the electronic device in a current place.
In another possible design manner of the second aspect, the electronic device is located at the first position, a position tag corresponding to the first position is obtained, and before the position tag characterizes environmental information obtained by the electronic device in a current location, the electronic device may further obtain position information preset by the electronic device at the first position.
In another possible design manner of the second aspect, after the electronic device obtains the confidence coefficient, if it is determined that the confidence coefficient is greater than the first preset threshold value and less than or equal to the second preset threshold value, it may be determined that the electronic device starts executing the first event; and if the confidence coefficient is larger than a second preset threshold value, determining that the electronic equipment executes the first event. Wherein the first preset threshold is less than the second preset threshold.
In another possible design manner of the second aspect, the execution result includes a current display interface of the electronic device.
In another possible design manner of the second aspect, if the position tag includes: all first site addresses acquired by the electronic equipment, the signal intensity of each first site address, the first cell identification acquired by the electronic equipment and the signal intensity of the first cell identification.
The electronic device compares the position tag with position information preset in the first position to obtain a comparison result, which specifically includes: generating a first set of items including a first site and a first cell identity based on the location tag; generating a second item set of a second site and a second cell identifier based on preset position information; and according to the first item set and the second item set, calculating to obtain a comparison result of the position label and the preset position information.
In another possible design manner of the second aspect, if the first action is passing through a subway gate with the two-dimensional code displayed by the electronic device, the two-dimensional code is a subway riding code. If the first action is boarding a bus with the two-dimensional code displayed by the electronic device, the two-dimensional code may be a bus riding code.
In another possible design manner of the second aspect, before the obtaining the confidence coefficient of the electronic device executing the first event, the method may further include: the electronic device starts executing the first event in response to a user operation.
In another possible design manner of the second aspect, the electronic device is specifically configured to: the electronic device receives a first operation input by a user, and the first operation is used for indicating the electronic device to run a first application. Responding to the first operation, the electronic equipment runs a first application and displays a first interface, wherein the first interface comprises a two-dimensional code, and the two-dimensional code is a subway riding code. The electronic equipment receives the holding and turning operation of the user, and responds to the holding and turning operation, so that the display screen of the electronic equipment is opposite to the code brushing port of the subway gate.
The first event is specifically described by taking the subway station code brushing as an example.
In another possible design manner of the second aspect, the electronic device is specifically configured to: in response to the first operation, after running the first application and displaying the first interface, acquire the first interface. Further, the electronic device identifies the first interface to obtain a first identification result, and may determine the confidence level of the first event according to the first identification result. The electronic device may then continue to acquire its current display interface once every first preset time period, and determine the confidence level of the first event according to the current display interface.
In another possible design manner of the second aspect, the electronic device is specifically configured to: after receiving the hold-and-flip operation of the user, acquire the current display interface after a second preset time period, and identify the current display interface to obtain a second identification result; the electronic device then determines the confidence level of the first event according to the second identification result.
In a third aspect, the present application further provides an electronic device, including: one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions, which when executed by the electronic device, cause the electronic device to perform the method of the first aspect and any of its possible designs.
In a fourth aspect, the present application also provides a computer readable storage medium comprising computer instructions which, when run on a computer, cause the computer to perform the method of the first aspect and any of its possible designs.
In a fifth aspect, embodiments of the present application provide a computer program product which, when run on a computer, causes the computer to perform the method performed by the electronic device in the first aspect and any of its possible designs.
In a sixth aspect, embodiments of the present application provide a chip system that is applied to an electronic device. The system-on-chip includes one or more interface circuits and one or more processors; the interface circuit and the processor are interconnected through a circuit; the interface circuit is used for receiving signals from the memory of the electronic device and sending signals to the processor, wherein the signals comprise computer instructions stored in the memory; the computer instructions, when executed by a processor, cause an electronic device to perform the method of the first aspect and any of its possible designs described above.
It may be appreciated that the advantages achieved by the electronic device of the second aspect, the electronic device of the third aspect, the computer readable storage medium of the fourth aspect, the computer program product of the fifth aspect and the chip system of the sixth aspect provided in the present application may refer to the advantages as in the first aspect and any possible design manners thereof, and are not repeated herein.
Drawings
Fig. 1 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present application;
Fig. 2 is a flowchart of a scene recognition method according to an embodiment of the present application;
Fig. 3A is a schematic diagram of a cloud server data mining flow according to an embodiment of the present application;
Fig. 3B is a schematic flowchart of the cloud server deleting fingerprint features according to an embodiment of the present application;
Fig. 4 is a flowchart of another scene recognition method according to an embodiment of the present application;
Fig. 5 is a flowchart of another scene recognition method according to an embodiment of the present application;
Fig. 6 is a schematic diagram of an application scenario of a scene recognition method according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a chip system according to an embodiment of the present application.
Detailed Description
The terms "first" and "second" are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present embodiment, unless otherwise specified, the meaning of "plurality" is two or more.
A mobile phone with NFC functionality can generate digital access cards, and multiple digital access cards, such as a company access card and a community access card, can be stored in the mobile phone. Thus, when the user needs to use the NFC function of the mobile phone, the user must select the correct access card on the mobile phone. When the mobile phone presents the company access card, the user can pass through the company's access control; when the mobile phone presents the community access card, the user can pass through the community's access control.
Specifically, suppose two NFC cards, a "home access card" and a "company access card", are stored in the mobile phone. When the user is located at the company entrance and needs to pass through the company's access control, the user enables the NFC function of the mobile phone and selects the "company access card", so that the mobile phone can serve as the access card. The user then brings the mobile phone close to the access control machine, which is triggered so that the user can pass through the company's access control.
This operation is cumbersome; to simplify it and improve the user experience, in one possible implementation the mobile phone may use a geofence to establish a correspondence between an NFC card and a geographic location. For example, the mobile phone collects the location information (such as a first place) where the user uses the "home access card" and establishes the correspondence. When the mobile phone detects that the user has arrived at the first place again and detects that the user triggers the NFC function, the mobile phone displays that NFC card, i.e., the home access card.
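The geofence-to-card correspondence can be sketched as below. The coordinates, fence radii, card names, and the distance approximation are all illustrative assumptions, not details from the patent.

```python
import math

# Hypothetical geofence table: each circular fence (center, radius in
# metres) is bound to the NFC card to display inside that fence.
GEOFENCES = [
    {"lat": 39.9100, "lon": 116.4000, "radius_m": 100, "card": "home access card"},
    {"lat": 39.9900, "lon": 116.3100, "radius_m": 100, "card": "company access card"},
]

def _distance_m(lat1, lon1, lat2, lon2):
    # Equirectangular approximation; adequate for fences of ~100 m
    r = 6371000.0
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return r * math.hypot(x, y)

def card_for_position(lat, lon):
    """Return the NFC card bound to the geofence containing (lat, lon), if any."""
    for fence in GEOFENCES:
        if _distance_m(lat, lon, fence["lat"], fence["lon"]) <= fence["radius_m"]:
            return fence["card"]
    return None
```

When the user triggers NFC inside a known fence, the phone would present the bound card directly instead of asking the user to choose.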
It will be appreciated that this approach may simplify the step of the user selecting the NFC card, but it cannot adaptively switch NFC cards at addresses other than the company or home.
In another possible implementation, the mobile phone may obtain the user's location information, and if the mobile phone detects that the user is located at a subway station, the mobile phone may display a subway ride code so that the user can use it to pass through the subway gate, simplifying the user's operation. It will be appreciated that this approach provides the user with a service (displaying the ride code) related to the mobile phone's current location information. However, a subway station is a large place: when the user is located in the subway station, the user may not be near a subway gate; alternatively, the user may be near the subway station without entering it. That is, this approach cannot accurately identify the position of the subway gate and cannot determine whether the user needs the mobile phone to provide the ride-code display service. If the user is not approaching the subway gate and does not want to pass through it, yet the mobile phone displays the subway ride code, the approach simplifies the operation steps but does not provide a good experience for the user.
The embodiment of the present application provides a scene recognition method and an electronic device. Taking a mobile phone implementing the scene recognition method as an example, the mobile phone can recognize the current scene and provide corresponding services for the user according to the current scene. For example, if the mobile phone determines that the current location is a subway station and that it is close to a subway gate, the mobile phone displays a subway ride code. For another example, when the mobile phone detects that the current location is a community gate and that it is close to the community access control machine, the mobile phone displays the NFC card of the home access control.
Specifically, the mobile phone can identify current scene information through multiple sensors and the like, and detect the correlation between the current scene information and a service provided by the mobile phone. Thus, when the mobile phone detects the same scene again, the mobile phone provides the service.
For example, during information collection, the scene information collected many times is: the mobile phone is located at a subway station and is close to a subway gate; at this time, the mobile phone displays the subway two-dimensional code based on a user operation. The mobile phone then associates the scene information with the operation, and the next time the mobile phone is in this scene (i.e., in a subway station and near a subway gate), the mobile phone displays the subway two-dimensional code.
For another example, during information collection, the scene information collected many times is: the mobile phone is located in a coffee shop and is close to the cash desk; at this time, the mobile phone displays the coffee shop membership card based on a user operation. The mobile phone then associates the scene information with the operation, and displays the coffee shop membership card when the mobile phone is again in this scene (i.e., in the coffee shop and near the cash desk).
The implementation of the examples of the present application will be described in detail below with reference to the accompanying drawings.
Referring to fig. 1, a schematic structural diagram of an electronic device 100 according to an embodiment of the present application is provided. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an NFC chip module 170, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The electronic device 100 may implement NFC functionality through the NFC chip module 170, as well as an application processor. The operation modes of the NFC chip module 170 include:
point-to-point mode: data exchange, such as exchanging music, transferring files, etc., can be performed on two NFC enabled devices. The two devices are required to have NFC functions, NFC file transmission basically belongs to a abandoned transmission mode, and because the NFC transmission speed is very low, the NFC file transmission mode is only used for transmitting data with small information quantity such as single contact persons, websites and the like between mobile phones, and therefore few use scenes are used in daily life.
Card reader mode: can be used as a non-contact card reader. Information can be read from or written into the electronic tag, but because NFC electronic tags in the market cannot be used in a large area, more consumers recharge the physical bus card through the mode, but along with gradual realization of the virtual bus card function, dependence of people on the physical bus card becomes weaker and weaker, and virtual bus card service integrated into a mobile phone becomes more convenient.
Card simulation mode: the mobile payment system is equivalent to a mobile phone with NFC function to replace a large number of physical IC cards (bus cards, credit cards and the like) to realize the mobile payment function.
It should be noted that the electronic device in the embodiment of the present application may be a mobile phone, a smart watch, a smart bracelet, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (ultra-mobile personal computer, UMPC), a netbook, a cellular phone, a personal digital assistant (personal digital assistant, PDA), an augmented reality (augmented reality, AR)/virtual reality (virtual reality, VR) device, or another device including a touch screen through which the device can be controlled.
In the embodiment of the present application, the description takes the electronic device being a mobile phone, and the scene recognition method being implemented on the mobile phone, as an example.
Specifically, the mobile phone may identify a current scene and perform a corresponding action according to the current scene. For example, when the handset is in a subway station, the handset may identify the current scene, based on which the handset displays an interface including a "subway ride code". Before that, the mobile phone needs to determine the current scene corresponding to the action so as to execute the corresponding action after the current scene is identified later.
When the mobile phone is at a preset position, the mobile phone executes a corresponding action. The mobile phone can record the scene information each time the action is executed. Based on statistics of the scene information over many occasions, the mobile phone can determine the correspondence between the scene information and the action. Thus, when the mobile phone is at the preset position and detects the scene information again, the mobile phone executes the action corresponding to the scene information.
For another example, the mobile phone may interact with a cloud server (also referred to as a remote server) to obtain the cloud server's fingerprint database, which includes multiple pieces of fingerprint information. Each piece of fingerprint information corresponds to a scene (e.g., a first scene) and may include WIFI, cell, GNSS, light intensity information, and the like. Thus, when the mobile phone detects that the current environment matches one piece of fingerprint information in the fingerprint database, the mobile phone can determine an action strongly related to the current scene according to that fingerprint information and execute the action.
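The fingerprint-database lookup just described can be sketched as follows. The record layout, the overlap score, and the 0.6 threshold are assumptions for illustration; the patent leaves the matching rule unspecified.

```python
def match_fingerprint(env, fingerprint_db, threshold=0.6):
    """Return the action bound to the best-matching fingerprint, or None.

    env and each fingerprint are dicts with 'wifi' and 'cell' item lists;
    the match score is the fraction of fingerprint items seen in env."""
    best_action, best_score = None, 0.0
    seen = set(env["wifi"]) | set(env["cell"])
    for fp in fingerprint_db:
        items = set(fp["wifi"]) | set(fp["cell"])
        score = len(items & seen) / len(items) if items else 0.0
        if score > best_score:
            best_action, best_score = fp["action"], score
    return best_action if best_score >= threshold else None
```

A phone near a known subway gate would thus resolve the current WIFI/cell readings to the "display ride code" action without any user input.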
In addition, when the mobile phone serves as a device for collecting fingerprint information, provided that the user has authorized the collection and reporting of information to the cloud server, the mobile phone can upload the collected location information and the actions of the related services to the cloud server. After the cloud server obtains the collected information uploaded by multiple devices, the cloud server can generate fingerprint information from it according to a preset algorithm, and the fingerprint information forms the fingerprint database.
First, the process of the mobile phone serving as a terminal for collecting information and transmitting the collected information to the cloud server is described. Please refer to fig. 2, which is a flowchart of the mobile phone transmitting collected information to the cloud server. This process may also be referred to as the training phase of the scene recognition method.
As shown in fig. 2, the training phase includes steps 201-206.
Step 201: when the mobile phone is located at the first position, the confidence coefficient of the first event is obtained, and a preset logic relationship exists between the first event and the first position.
Wherein the confidence level of the first event is used for representing the execution degree of the first event.
It should be noted that the first event may be that the mobile phone runs a first application under the operation of the user, or that the mobile phone receives a preset user operation and displays a preset interface. Specifically, after the mobile phone executes the first event, the mobile phone may display at least one preset interface. For example, the first position is a first subway station, and the mobile phone executing the first event is the mobile phone running the first application and displaying an interface including a "subway ride code". Alternatively, the first position is the first subway station, and the mobile phone executing the first event is the mobile phone running a second application and displaying an interface including a "health code", after which the mobile phone runs the first application and displays an interface including a "subway ride code". By way of example, the first application may be an application having an electronic payment function.
In some implementations, the preset logical relationship may be determined by the mobile phone based on the correlation between the first position and the first event. For example, the first position is the first subway station, and the first event is the mobile phone displaying an interface including a "subway ride code". Judging by the preset logical relationship, when the mobile phone determines that it is located at the first subway station, executing the first event enables the user to pass through the subway gate and thus take the subway. The user's need to take the subway when the mobile phone is at the first subway station is the preset logic, so that when the mobile phone determines that it is currently at the first subway station, it needs to detect whether the first event should be executed.
In other implementations, the mobile phone interacts with the cloud server and determines, according to information fed back by the cloud server, that it is currently at the first subway station and that whether it executes the first event needs to be detected. On this basis, the mobile phone can determine that it is in the data training stage and can collect data when executing the first event.
It should be noted that the first event has a preset logical relationship with the current position of the mobile phone; in other words, the current position (i.e., the first position) is a preset condition for the mobile phone to execute the first event. For the training phase, this preset logical relationship may be preset in the mobile phone as a strong association rule (or strong rule).
The first position is an area; when the mobile phone is in the first position, it can perform a number of actions under the control of the user. For example, the mobile phone can run a video application and play a video under the control of the user, or run the first application under the control of the user so as to complete a payment action, and so on. That is, when the mobile phone is in the first position, it cannot be determined whether and when the mobile phone executes the first event. Thus, a confidence level needs to be set. The confidence level is used to characterize the execution degree of the first event, such as whether the mobile phone is executing the first event and whether the first event has been completed. From another perspective, the confidence level is the degree to which the mobile phone's identification of the first event can be trusted. Specifically, during the training phase, if the mobile phone is in the first position, executes the first event, and completes the execution (i.e., it is credible that the mobile phone executed the first event), the mobile phone may next collect related information and so on. If the mobile phone does not execute the first event or the first event fails to execute (i.e., it is not credible that the mobile phone executed the first event), the mobile phone does not respond with the next action, and this is therefore not a complete training process.
Specifically, the mobile phone can identify the confidence level of the first event based on data collected by multiple sensors and the like. For example, continuing with the first position being the first subway station, the first event is the mobile phone displaying an interface including a "subway ride code". After determining that it is running the first application, the mobile phone can acquire the interface currently shown on the display screen. If the display interface is an interface including the "subway ride code", the mobile phone can confirm that the first event has started to execute. At this time, the confidence level of the first event may be 1, indicating that the mobile phone is executing the first event, i.e., that it is credible that the mobile phone is executing the first event. Conversely, when the mobile phone is at the first subway station and running the first application, if the acquired display interface is not an interface including the "subway ride code", the confidence level of the first event can be confirmed to be 0, i.e., it is not credible that the mobile phone is executing the first event.
It can be understood that after the mobile phone displays the interface including the "subway ride code", if the user brings the mobile phone close to the code scanning area of the subway gate so that the mobile phone completes the code scanning action, the acquired display interface may include prompt information such as "inbound", "outbound", "update two-dimensional code", or "code scanning failure". That is, when the display interface acquired by the mobile phone includes prompt information such as "inbound", "outbound", "update two-dimensional code", or "code scanning failure", the confidence level of the first event may also be 1, indicating that the mobile phone is executing the first event, i.e., that it is credible that the mobile phone is executing the first event.
It should be appreciated that the mobile phone may acquire the display interface of the display screen in real time so that it can determine the confidence level of the first event. Specifically, the mobile phone can decide whether to acquire the display interface according to the currently running application: when the mobile phone determines that the first application is currently running, it can acquire the display interface in real time (or at preset time intervals).
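The confidence determination of step 201 can be sketched as a simple screen-recognition check. The marker strings below stand in for whatever labels an on-device recognizer would produce; they are illustrative assumptions, not a specified API.

```python
# Prompts that, if recognized on the current display interface, indicate
# the first event is being executed (labels are illustrative assumptions).
EXECUTING_MARKERS = {"subway ride code", "inbound", "outbound",
                     "update two-dimensional code", "code scanning failure"}

def confidence_of_first_event(recognized_labels):
    """Confidence is 1 when the display interface shows the first event
    being executed (ride code shown, or a gate prompt), and 0 otherwise."""
    return 1 if EXECUTING_MARKERS & set(recognized_labels) else 0
```

A confidence of 1 corresponds to "credible" in the text above; 0 means the phone is at the first position but not executing the first event.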
Step 202: and judging whether the confidence coefficient is larger than a first preset threshold value. If the confidence is greater than the first preset threshold, execute step 203; if the confidence level is less than or equal to the first preset threshold, step 206 is performed.
The first preset threshold can serve as the basis for judging whether the first event is executed: a confidence level of the first event greater than the first preset threshold indicates that the mobile phone is executing, or has executed, the first event while located at the first position.
In other implementations, it may alternatively be set that step 203 is executed when the confidence level is greater than or equal to the first preset threshold, and step 206 is executed when the confidence level is less than the first preset threshold.
Step 203: and determining an execution result of the first event, and checking the execution result to obtain a checking result.
Specifically, the execution result includes the first event being executed successfully (or, equivalently, the execution of the first event being completed) and the first event failing to execute. The execution result of the first event may be determined according to data collected by sensors in the mobile phone, or by obtaining information about the current display interface of the mobile phone.
Illustratively, continue to take as an example that the first event is the mobile phone displaying an interface including the "subway ride code". When the mobile phone displays this interface, the confidence level of the first event indicates that the first event is credible. The user then aims the display interface of the mobile phone at the code scanning port of the subway gate, so that the mobile phone completes the first event. After the mobile phone completes the first event, its display interface changes. After the display interface changes, the mobile phone obtains the execution result of the first event, namely the interface currently shown on its display screen.
The mobile phone acquires an execution result of the first event, and determines a verification result of the first event by identifying a current display interface of a display screen of the mobile phone. The check result indicates that the first event was successfully executed or the check result indicates that the first event was not successfully executed.
It should be noted that the execution result of the first event is determined according to the current display interface of the mobile phone's display screen. In some implementations, when it is determined that the display screen shows an interface including the "subway ride code", the mobile phone acquires the current display interface once every preset time interval, so as to determine the verification result of the first event from the display interface. In other implementations, when it is determined that the display screen shows an interface including the "subway ride code" and that the current display interface has changed, the mobile phone acquires the current display interface, so as to determine the verification result of the first event from it.
Step 204: and judging whether the first event is successfully executed according to the checking result. If the first event execution is successful, execute step 205; if the first event fails to execute, step 206 is performed.
Specifically, after the mobile phone identifies the current display interface of the display screen, if the display interface includes a prompt of "inbound" or "outbound", it may be determined that the verification result indicates that the first event was executed successfully, and step 205 is executed. If the display interface does not include a prompt of "inbound" or "outbound", for example, if it includes a prompt of "code scanning failure" or "update two-dimensional code", it may be determined that the verification result indicates that the first event was not executed successfully (the first event failed to execute), and step 206 is executed.
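The branch in steps 203-204 can be sketched as a classification of the recognized prompts. As above, the prompt strings are illustrative stand-ins for the recognizer's output.

```python
def check_result(recognized_labels):
    """Map prompts recognized on the current display interface to the
    verification result of the first event."""
    labels = set(recognized_labels)
    if {"inbound", "outbound"} & labels:
        return "success"   # gate accepted the code -> proceed to step 205
    if {"code scanning failure", "update two-dimensional code"} & labels:
        return "failure"   # -> proceed to step 206
    return "unknown"       # keep sampling the display interface
```

Only a "success" result triggers collection and upload of the location tag; "failure" falls through to the no-response branch.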
Step 205: and acquiring a position label corresponding to the first position in the mobile phone, and uploading the position label corresponding to the first position to a cloud server.
The location tag may include: WIFI, cell, GNSS, and light intensity information. If the mobile phone has obtained the permission granted by the user, after the first event is completed the mobile phone can take the collected WIFI, cell, GNSS, and light intensity information as the location tag of the first position and upload it to the cloud server.
For example, WIFI refers to all site addresses (basic service set identifiers, BSSIDs) found by the mobile phone's WIFI scan during execution of the first event, along with the signal strength corresponding to each BSSID. Cell refers to the cell identifier acquired by the mobile phone while executing the first event, along with the signal strength of that cell identifier. GNSS refers to the satellite signals acquired by the mobile phone during execution of the first event. The light intensity signal refers to the illumination intensity of the environment where the mobile phone is located, as determined by a sensor (e.g., the ambient light sensor) during the first event.
Continuing with the example in which the first event is the mobile phone displaying an interface including the "subway ride code": the location tag indicates that the mobile phone is near a subway gate in the first subway station (e.g., at the code-scanning position of the subway gate or at a position just past the gate), and after the mobile phone successfully executes the first event, it uploads the collected location tag to the cloud server.
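One uploaded location tag might look like the record below. The field names, units, and values are hypothetical; only the four information categories (WIFI, cell, GNSS, light intensity) come from the text.

```python
# A single location-tag record as uploaded after a successful first event
# (field names and values are illustrative assumptions).
location_tag = {
    "wifi": [  # every BSSID seen while executing the first event, with RSSI in dBm
        {"bssid": "aa:bb:cc:dd:ee:01", "rssi": -48},
        {"bssid": "aa:bb:cc:dd:ee:02", "rssi": -67},
    ],
    "cell": {"cell_id": "460-00-1234-5678", "rssi": -85},
    "gnss": {"satellites_visible": 2},   # weak satellite signal suggests indoors
    "light_lux": 120.0,                  # ambient light sensor reading
    "event": "display_subway_ride_code",
}
```

The GNSS and light fields support the indoor/outdoor judgment mentioned later, while the WIFI and cell fields feed the frequent-item-set mining.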
Step 206: and does not respond.
Here, unresponsiveness means that the handset does not respond.
Specifically, if the mobile phone determines that the confidence level is less than or equal to the first preset threshold, this indicates that the correlation between the first position and the first event is not high, and the mobile phone does not need to execute the subsequent steps. Likewise, when the mobile phone determines that the verification result of the first event indicates that the execution of the first event failed, the first position and the first event do not currently need to be reported to the cloud server, and therefore the location tag corresponding to the first position does not need to be collected. In this case, too, the mobile phone does not respond.
It should be noted that multiple mobile phones can all, as terminals, upload the location tag corresponding to the first position and the first event to the cloud server. Taking the first position being the first subway station as an example, a first mobile phone can upload a location tag of the first subway station and the first event to the cloud server, and a second mobile phone can likewise upload a location tag of the first subway station and the first event to the cloud server.
The cloud server may generate a fingerprint tag of the first location (i.e., unique corresponding information for the first location) according to the location tag corresponding to the first location.
It should be understood that the GNSS signal and the light intensity signal may be used to determine whether the mobile phone is currently indoors or outdoors, and that the Cell information and the WIFI information may be used to determine the area of the first position where the mobile phone is located.
In one possible implementation, the cloud server may apply data mining to the information uploaded by multiple terminals. For example, an association rule algorithm (the Apriori algorithm) may be used to mine the Cell and WIFI information corresponding to the first position; in particular, frequent item set mining may be performed on the Cell and WIFI information. When the Apriori algorithm is run, frequent item sets are obtained from the data. A frequent pattern is a set of items, a subsequence, or a substructure that occurs frequently in a data set, and an item set is simply a collection of items. In the location tag corresponding to the first position, the WIFI information uploaded by a terminal (i.e., a list of wireless network names) is an item set. A frequent item set is an item set whose frequency of occurrence among all item sets exceeds a preset frequency, called the minimum support threshold.
In summary, based on the location tags of the first position uploaded by multiple terminals, the Apriori algorithm can determine the item sets that occur most frequently. Taking frequent item set mining on the Cell and WIFI information as an example, the item sets meeting the condition, namely those whose support exceeds the minimum support threshold, are mined. Specifically, the support of each item set is determined through data mining and used to judge whether the item set meets the fingerprint requirement of the first position; if the support of an item set exceeds the minimum support threshold, the item set can serve as a fingerprint feature of the first position.
For example, the support of each item set may be calculated by the following Equation 1:
support(X→Y) = count(X, Y) / N (Equation 1)
where X represents the ID of the base station obtained from the Cell signal, namely the CELL ID; Y represents the wireless network address obtained from the WIFI signal, namely a basic service set identifier (Basic Service Set Identifier, BSSID) included in the WIFI list, i.e., the WIFI BSSID; support(X→Y) represents the support of the item set associating the base station with a specific area; N represents the total number of item sets; and count(X, Y) represents the number of occurrences of the item set <cell, wifi>.
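As a non-authoritative sketch, the frequent item set mining and Equation 1 can be expressed as follows; the tag layout, the cell/BSSID names, and the minimum support value are illustrative assumptions, not taken from this application.

```python
from collections import Counter

def mine_frequent_item_sets(location_tags, min_support):
    """Mine frequent <cell, wifi> item sets from uploaded location tags.

    support(X -> Y) = count(X, Y) / N  (Equation 1), where N is the total
    number of <cell, wifi> item sets observed across all uploads.
    """
    counts = Counter()
    for tag in location_tags:
        for bssid in tag["wifi"]:                  # one item set per BSSID
            counts[(tag["cell"], bssid)] += 1
    n = sum(counts.values())
    return {pair: c / n for pair, c in counts.items() if c / n > min_support}

# Hypothetical tags reported by terminals near the same subway gate.
tags = [
    {"cell": "cell_A", "wifi": ["wifi_1", "wifi_2"]},
    {"cell": "cell_A", "wifi": ["wifi_1"]},
    {"cell": "cell_B", "wifi": ["wifi_9"]},
]
frequent = mine_frequent_item_sets(tags, min_support=0.3)
# Only <cell_A, wifi_1> appears in 2 of the 4 item sets (support 0.5 > 0.3),
# so it alone qualifies as a fingerprint feature of the first position.
```

Item sets below the minimum support threshold (here, those seen only once) are discarded as noise, which is the role of the threshold described above.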
In other cases, if no item set containing <cell, wifi> is mined in the data mining stage, that is, no WIFI information exists in the location tags reported by the terminals, and the location of the terminal still needs to be calculated, the terminal may be positioned by cell according to the CELL ID it reports. It should be appreciated that positioning accuracy is low when matching only by CELL ID: because the communication range of a base station is large, the specific area where the terminal is located cannot be determined accurately.
In some embodiments, the cloud server data mining process described above may be illustrated using a flowchart as shown in fig. 3A. As shown in fig. 3A, the cloud server data mining process includes:
Step 301: and the cloud server adopts a preset data processing algorithm, and calculates fingerprint characteristics of the first position according to the position label uploaded by the terminal.
The cloud server may calculate the fingerprint feature of the location tag according to the above method after receiving the location tag uploaded by the terminal.
Step 302: judging whether the fingerprint feature exists in a fingerprint library of the first position or not; if the fingerprint feature already exists in the fingerprint library of the first location, then step 303 is performed; otherwise, step 304 is performed.
The fingerprint library of the first location includes a plurality of fingerprint features that allow the cloud server to uniquely characterize the first location. Fingerprint features in the library are generated from location tags of multiple terminals, so the currently calculated fingerprint feature may already be stored in the library. To avoid duplicate fingerprint features, which would affect matching of the first position, a comparison is needed each time a fingerprint feature is to be added.
Step 303: no fingerprint library information is added.
If the fingerprint features which are the same as the current fingerprint features exist in the fingerprint library, the fingerprint features do not need to be added in the fingerprint library.
In another implementation, if data mining yields a fingerprint feature that already exists in the fingerprint library, that existing fingerprint feature may be marked so as to strengthen the association between the fingerprint feature and the first location.
Step 304: save the item set obtained by data mining to the fingerprint library of the first position.
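Steps 302-304 amount to a duplicate check before insertion. A minimal sketch follows; the set-based data structure and the tuple item sets are assumptions for illustration, not specified by this application.

```python
def add_fingerprint(fingerprint_library, feature):
    """Add a mined item set to the first location's fingerprint library
    only if an identical fingerprint feature is not already stored."""
    if feature in fingerprint_library:
        return False                      # step 303: nothing added
    fingerprint_library.add(feature)      # step 304: save the mined item set
    return True

library = {("cell_A", "wifi_1")}
add_fingerprint(library, ("cell_A", "wifi_1"))   # duplicate, library unchanged
add_fingerprint(library, ("cell_A", "wifi_2"))   # new feature, stored
```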
It should be understood that, when the mobile phone can interact with the cloud server, the mobile phone can determine the first action corresponding to the first position according to the fingerprint library information of the first position. In addition, the cloud server can receive location tags of the first position transmitted by multiple terminals and update the fingerprint features in the fingerprint library accordingly, so that the fingerprint library it provides to the mobile phone accurately yields the first action associated with the first position.
In some implementations, when updating the fingerprint library in real time, the cloud server further deletes "aged" fingerprint information, that is, fingerprint information that has not been invoked within a preset period. For example, when matching the current location of the mobile phone against the fingerprint information in the fingerprint library, if a piece of fingerprint information has not been used within 45 days, it is called "aged" fingerprint information, and the cloud server may delete it.
Specifically, the process of deleting fingerprint information is shown in fig. 3B and includes steps 3-1 to 3-4. The flowchart takes deletion of a single fingerprint feature as the example; the cloud server may perform this process for each fingerprint feature in the fingerprint library.
Step 3-1: the cloud server generates a timestamp of the fingerprint feature to determine a time interval in which the fingerprint feature is not used.
The earlier the time stamp that a fingerprint is generated, the higher the probability that the fingerprint will be "aged". For example, the cloud server may determine the order of determination of the fingerprint features from the fingerprint feature process for which the generation time stamp is earliest.
Step 3-2: and judging whether the time interval exceeds a preset time interval or not. If yes, executing the step 3-3; otherwise, step 3-4 is performed.
Step 3-3: deleting the fingerprint feature.
Step 3-4: the fingerprint feature is not processed.
Next, the scene recognition method provided by the embodiments of this application is described taking the example in which the mobile phone serves as the computing terminal, computes the area of the first position, and provides the first action to the user. Fig. 4 shows a flowchart of how the mobile phone implements the method; this process may also be referred to as the computation phase of the scene recognition method.
As shown in fig. 4, the calculation phase includes steps 401-406.
Step 401: when the mobile phone determines the first position where the mobile phone is currently located, a fingerprint library of the first position from the cloud server is received, wherein the fingerprint library comprises a plurality of fingerprint features of the first position.
It will be appreciated that only when the mobile phone is within a preset area of the first position (e.g., a position in the first subway station near the subway gate) does the mobile phone need to perform the first action.
For example, fig. 6 is a schematic diagram of the first subway station. As shown in fig. 6, when the user enters the first subway station carrying the mobile phone, the mobile phone detects that its current position is in the first area 50 and determines that it is near the entrance subway gate of the first subway station. At this time, the mobile phone may display an interface including the "subway riding code", that is, the mobile phone performs the first action. By contrast, when the mobile phone is at the first position 10 or the second position 20, it need not display an interface including the "subway riding code". When the mobile phone detects that it is located in the first subway station, it may perform the calculation; specifically, it performs a matching calculation against the fingerprint library of the first subway station to determine whether it is currently located in the first area 50.
Step 402: the mobile phone obtains a position tag of the first position.
Wherein, the position label includes: WIFI, cell, GNSS and light intensity information.
Specifically, the corresponding position label is determined according to the authority acquired by the mobile phone. For example, the mobile phone acquires the authority to collect WIFI, cell, GNSS and light intensity information, and when the mobile phone is located at the first position, WIFI, cell, GNSS and light intensity information can be acquired in real time.
In some implementations, when the mobile phone is at the first position, it collects a location tag including WIFI, Cell, GNSS and light intensity information. When the mobile phone detects that the information in any dimension of the location tag has changed, it updates the location tag of the first position. For example, when the mobile phone detects that it is currently at the first position and the light intensity it collects is the first illumination intensity, the light intensity in the location tag of the first position is the first illumination intensity. After a preset time, the mobile phone is still at the first position; if it now detects that the light intensity has changed to the second illumination intensity, the mobile phone updates the light intensity information in the location tag to the second illumination intensity.
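The per-dimension update rule can be sketched as follows; the dictionary representation of the tag and the example values are assumptions for illustration.

```python
def update_location_tag(tag, observation):
    """Update the stored location tag whenever any dimension (WIFI, Cell,
    GNSS, light intensity) of a new observation differs from the stored
    value; return True if the tag changed."""
    changed = False
    for key in ("wifi", "cell", "gnss", "light"):
        if key in observation and observation[key] != tag.get(key):
            tag[key] = observation[key]
            changed = True
    return changed

tag = {"wifi": ["wifi_1"], "cell": "cell_A", "gnss": "sat_1", "light": 120}
update_location_tag(tag, {"light": 300})   # light changes to a new intensity
```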
Step 403: the mobile phone matches the location tag against the fingerprint features in the fingerprint library to obtain a matching degree, which represents how well the location tag matches the fingerprint features.
In some implementations, if the location tag includes an item set <cell, wifi>, the matching degree of the location tag against a fingerprint feature may be calculated using the following Equation 2:
score = COUNT(A ∩ B) / COUNT(B) (Equation 2)
where score represents the matching degree, A represents the item set <cell, wifi> collected by the user's mobile phone, and B represents the item set <cell, wifi> in the fingerprint feature. COUNT(A ∩ B) represents the number of items common to the item set corresponding to A and the item set corresponding to B; COUNT(B) represents the number of items in the item set corresponding to B.
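Equation 2 reduces to an intersection-over-fingerprint ratio. A minimal sketch, using hypothetical item sets:

```python
def matching_score(collected, fingerprint):
    """Equation 2: score = COUNT(A ∩ B) / COUNT(B), where A is the item set
    collected by the phone and B is the item set in the fingerprint feature."""
    a, b = set(collected), set(fingerprint)
    return len(a & b) / len(b)

collected = [("cell_A", "wifi_1"), ("cell_A", "wifi_2")]
score = matching_score(collected, [("cell_A", "wifi_1")])   # score = 1.0
```

A higher score indicates that more of the fingerprint feature's items were observed by the phone, and the score is then compared with the preset matching threshold in step 404.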
It can be understood that if the location tag collected by the mobile phone does not include the item set of < CELL, wifi >, the specific location of the mobile phone at the first location can be located by adopting the CELL ID matching mode. It is to be understood that, since there is no WIFI signal to enhance positioning, positioning accuracy at this time is low.
For example, taking the first subway station as the first position: after the mobile phone completes the first action, it collects a location tag and uploads it to the cloud server. Based on the location tags uploaded by multiple mobile phones, the cloud server can maintain multiple <cell, wifi> fingerprints in the fingerprint library of the first subway station. Suppose the Cell information in the location tag indicates that the mobile phone is in cell cell_A with signal strength E1, and the WIFI information includes the network names wifi_1, wifi_2, and wifi_3. The cloud server can then generate three fingerprint features of the first subway station based on this location tag: <cell_A, wifi_1>, <cell_A, wifi_2>, and <cell_A, wifi_3>.
After the mobile phone acquires the fingerprint features of the first subway station, it can match each fingerprint feature against the location tag it currently collects. If two, or all three, of the fingerprint features are consistent with the currently collected location tag, this indicates that the mobile phone is currently near a subway gate, and the mobile phone can perform the first action, namely displaying an interface including the subway riding code.
Step 404: judging whether the matching degree is larger than a preset matching threshold value. If the matching degree is greater than the preset matching threshold, step 405 is executed; if the matching degree is less than or equal to the preset matching threshold, step 406 is performed.
If the matching degree is greater than the preset matching threshold, the specific position of the mobile phone within the first position is the preset area indicated by the fingerprint library.
If the matching degree is less than or equal to the preset matching threshold, the current specific position of the mobile phone does not coincide with the preset area indicated by the fingerprint library. When the user moves the mobile phone within the first position, the specific position of the mobile phone changes, so the location tag collected by the mobile phone changes as well. The mobile phone can keep collecting the location tag in real time and update it when its information changes. It will be appreciated that when the location tag is updated, the mobile phone may execute step 403 again.
It should be noted that, in one possible implementation, the mobile phone may also perform step 405 when the matching degree is greater than or equal to the preset matching threshold, and perform step 406 when the matching degree is less than the preset matching threshold.
Step 405: determine that the mobile phone is in the preset area of the first position, and the mobile phone performs the first action.
It can be appreciated that when the mobile phone is determined to be in the preset area within the first position, the probability that the first action should be performed there is high, so the mobile phone performs the first action, providing a good user experience.
Step 406: do not respond.
Here, the mobile phone is determined to be at the first position but not in the preset area. The mobile phone does not need to perform the first action; since the current location requires no response (e.g., performing the first action), the mobile phone does not respond.
Taking the scenario shown in fig. 6 as an example, when the mobile phone is determined to be in the first subway station and within the first area 50, the mobile phone displays an interface including the "subway riding code" after recognizing the user's operation. When the mobile phone recognizes that scanning the subway riding code succeeded, this indicates that the user has carried the mobile phone through the subway gate. At this time, the mobile phone reports the currently collected location tag, together with its behavior of performing the first action (i.e., displaying the interface including the subway riding code), to the cloud server. The cloud server can generate a fingerprint library of the first position from the data reported by the mobile phone; the fingerprint library includes fingerprint features generated from location tags of the first position.
In addition, when the mobile phone is in the first position, the mobile phone can acquire a fingerprint library of the first position from the cloud server. Meanwhile, the mobile phone acquires a current position tag, and matches the position tag with fingerprint features in a fingerprint library to determine the current first position area of the mobile phone, so as to execute a first action. For example, when the mobile phone is at the first position 10 (or the second position 20), the matching degree between the position tag obtained by the mobile phone and the fingerprint feature in the fingerprint database is low, which indicates that the position tag is inconsistent with the fingerprint database, and the mobile phone does not respond. When the mobile phone is in the first area 50, the mobile phone updates the position tag, and matches the fingerprint feature in the fingerprint database, and the matching degree of the position tag and the fingerprint feature in the fingerprint database is higher than the preset matching threshold, which indicates that the position tag matches with the fingerprint database, and the mobile phone executes the first action.
It should be understood that when the mobile phone is at the first position, the mobile phone may detect the position tag in the first area 50 and report data to the cloud server; the mobile phone can also detect the position tag in the second area 51 and report the data to the cloud server.
The first location may also be a beverage shop (e.g., a cafe), a bus stop, a restaurant, a residential community, an office, and so on. In the scenario above, the first position was exemplified by the first subway station, a public place: when the user is near the subway gate of the first subway station, the user very likely wants to enter or exit. By determining that it is near the gate of the first subway station and then performing the first action, the mobile phone simplifies user operation and provides convenience. For more specific places, such as beverage shops, restaurants, residential communities and offices, only mobile phones that store the corresponding membership card or access card have a first action for those places. If the first position is a beverage shop, when the mobile phone approaches the shop's cash register, it determines that the user intends to make a purchase, and the first action is displaying the electronic membership card of the beverage shop. If the mobile phone stores an electronic access card of a residential community or office and the first position is that community or office, then when the mobile phone approaches the access control machine there, it determines that the user needs to pass through the access control machine, and the first action is displaying the electronic access card of the residential community or office.
It can be understood that, if there are no membership cards or electronic access cards in these places, the mobile phone does not need to determine the correspondence between the first location and the first action, i.e. the mobile phone will not execute the first action in these places.
For example, in a mobile phone with NFC function, a corporate access card, a community access card, and a bus card are stored in advance. When the mobile phone is located in a company, the mobile phone obtains a position tag corresponding to the position of the company, and obtains a fingerprint library corresponding to the position of the company from the cloud server. The mobile phone detects that the current position tag is matched with the fingerprint library, and determines that the mobile phone is currently positioned near the company card swiping machine, and then the mobile phone displays the company access control card (electronic NFC card).
For another example, when the mobile phone is located at the first bus station, the mobile phone obtains a position tag corresponding to the first bus station, and meanwhile, obtains a fingerprint library corresponding to the first bus station from the cloud server. The mobile phone detects that the position tag is matched with the fingerprint database, and when the first bus reaches the first station, the mobile phone updates the position tag. At this time, if the mobile phone determines that the position tag is matched with the fingerprint library, the mobile phone displays a bus card (electronic NFC card).
It should be understood that, if the first bus provides a WiFi signal, then when the first bus reaches the first bus station the WiFi signal observed by the mobile phone changes and the location tag is updated; the mobile phone then matches the location tag against the fingerprint library and recognizes that the first bus is a bus the user commonly takes. The mobile phone displays the bus card so that the user can board the first bus. That is, when the mobile phone reports data, a corresponding fingerprint library can be created for the mobile phone user that captures the bus code-scanning behavior at the first bus station, so that a matching calculation can be performed when the mobile phone is at the first bus station to determine the bus the user is about to take.
In other implementations, for example, the first location is a drink shop (or other consumer location), and the first action the handset performs when it is determined to be at the first location is to display a drink shop membership card. Thus, when the mobile phone detects that it is in the first location (i.e., the drink shop), the mobile phone can display the preferential information of the drink shop.
It can be appreciated that, in the method provided by this application, the electronic device requires no additional auxiliary devices: the electronic device reports data to the cloud server, and the cloud server builds a fingerprint library for the first position from that data. The cloud server's fingerprint library can also be obtained by other terminals, so that when another terminal receives the fingerprint library, it can determine by matching whether it is in the preset area of the first position and can then perform the first action.
It should be noted that, when the electronic device reports the data, the electronic device may determine the confidence level of the first position and the first action according to the sensor, the display screen, and the like. The electronic device may associate the first location with the first action and report this association to the cloud server. Secondly, when the cloud server generates the fingerprint database of the first position, a data mining mode is adopted to perform data mining processing on a large amount of data according to the data uploaded by the terminals, so that the influence of noisy data on fingerprint features is reduced, and the fingerprint features of the first position are obtained.
In one implementation, the mobile phone perceives the behavior of the user scanning a subway riding code and passing through a subway gate: the mobile phone automatically collects the wifi/cell/light intensity/satellite information of that position, marks it with a ground-truth label, and uploads it to the cloud-side server, and the cloud side learns from it and updates the position fingerprint library. Subsequently, the mobile phone can download the latest position fingerprint library from the cloud side and periodically collect the wifi/cell/light intensity/satellite information of the user's position to match against the fingerprint library. If the matching succeeds, the mobile phone recognizes that the user is near a subway gate and can automatically pop up the subway riding code, allowing the user to conveniently and quickly scan the code and pass the gate.
For example, when determining wifi/cell/light intensity/satellite information of the current position, the mobile phone may calculate whether the current position is a position close to the gate of the subway by using a machine learning model. The method flow is shown in fig. 5, and the method comprises steps 501-505.
Step 501: the mobile phone is positioned at a first subway station, and wifi/cell/light intensity/satellite information of the first subway station is acquired.
Step 502: and calculating the distance between the mobile phone and the subway gate in the first subway station based on a preset machine learning model.
The mobile phone takes the wifi/cell/light intensity/satellite information acquired in real time and the wifi/cell/light intensity/satellite information corresponding to the preset first subway station as inputs of a machine learning model, and an output result of the machine learning model is obtained. The output result is used for indicating the distance value between the current position of the mobile phone and the subway gate.
Step 503: judge whether the distance value is less than or equal to a preset distance value. If yes, go to step 504; otherwise, step 505 is performed.
Step 504: and determining that the mobile phone is near a subway gate of the first subway station, and executing a first action.
Step 505: do not respond.
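This application does not specify the machine learning model, so the sketch below covers only the decision flow of steps 502-505; the overlap-based distance estimator and the 5-meter threshold are invented stand-ins, and a small estimated distance is treated as "near the gate".

```python
PRESET_DISTANCE = 5.0   # hypothetical threshold in meters

def estimate_gate_distance(observed, gate_reference):
    """Stand-in for the preset machine learning model of step 502: the more
    the observed wifi/cell/light/satellite readings overlap the reference
    readings for the gate area, the smaller the estimated distance."""
    overlap = len(set(observed) & set(gate_reference))
    return 10.0 / (1 + overlap)

def decide(observed, gate_reference):
    """Steps 503-505: perform the first action only when the estimated
    distance to the subway gate is within the preset value."""
    distance = estimate_gate_distance(observed, gate_reference)
    return "first_action" if distance <= PRESET_DISTANCE else "no_response"

reference = ["wifi_1", "wifi_2", "cell_A"]
near = decide(["wifi_1", "wifi_2", "cell_A"], reference)   # "first_action"
far = decide(["wifi_9"], reference)                        # "no_response"
```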
The method provided by the embodiment of the application is described by taking the electronic device as a mobile phone, and the method can also be adopted when the electronic device is other devices. And will not be described in detail herein.
It may be understood that, in order to implement the above-mentioned functions, the electronic device provided in the embodiments of the present application includes corresponding hardware structures and/or software modules that perform each function. Those of skill in the art will readily appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
The embodiment of the application may divide the functional modules of the electronic device according to the method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation.
Embodiments of the present application also provide a chip system, as shown in fig. 7, which includes at least one processor 601 and at least one interface circuit 602. The processor 601 and the interface circuit 602 may be interconnected by wires. For example, the interface circuit 602 may be used to receive signals from other devices (e.g., a memory of an electronic apparatus). For another example, the interface circuit 602 may be used to send signals to other devices (e.g., the processor 601). The interface circuit 602 may, for example, read instructions stored in a memory and send the instructions to the processor 601. The instructions, when executed by the processor 601, may cause the electronic device to perform the various steps of the embodiments described above. Of course, the chip system may also include other discrete devices, which are not specifically limited in this embodiment of the present application.
The embodiment of the application also provides a computer storage medium, which comprises computer instructions, when the computer instructions run on the electronic device, the electronic device is caused to execute the functions or steps executed by the mobile phone in the embodiment of the method.
The present application also provides a computer program product, which when run on a computer, causes the computer to perform the functions or steps performed by the mobile phone in the above-mentioned method embodiments.
It will be apparent to those skilled in the art from this description that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product stored in a storage medium, the software product including several instructions for causing a device (which may be a single-chip microcomputer, a chip or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing is merely a specific embodiment of the present application, but the protection scope of the present application is not limited thereto; any change or substitution within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (13)

1. A scene recognition method, comprising:
detecting that an electronic device enters a first area;
after detecting that the electronic device has entered the first area, detecting a location tag corresponding to a first position when the electronic device is at the first position, wherein the location tag represents environment information collected by the electronic device at its current place, and the first position is within the first area;
comparing the location tag with location information corresponding to a preset position to obtain a comparison result, wherein the preset location information represents environment information of the electronic device in a preset scene at the first position, and the preset position is within the first area;
if the comparison result indicates that the location tag is consistent with the location information corresponding to the preset position, triggering the electronic device to automatically perform a first action corresponding to the first position, wherein the first action comprises the electronic device running a first application and displaying a first interface, and the first interface comprises a two-dimensional code;
if the comparison result indicates that the location tag is not consistent with the location information corresponding to the preset position, the electronic device does not automatically perform the first action corresponding to the first position;
wherein the first action has a preset logical relationship with the first position;
the location tag comprises: the addresses of all first Wi-Fi access points collected by the electronic device and the signal strength of each first Wi-Fi access point, and at least one of a first cell identifier obtained by the electronic device together with the signal strength of each first cell identifier, and a first illumination intensity collected by the electronic device;
the preset location information comprises: the addresses of all second Wi-Fi access points collected by the electronic device and the signal strength of each second Wi-Fi access point, and at least one of a second cell identifier obtained by the electronic device together with the signal strength of each second cell identifier, and a second illumination intensity collected by the electronic device;
the location information corresponding to the preset position is generated from a location tag collected by the electronic device when the electronic device, after entering the first area, performs a first event in response to a user operation;
wherein the electronic device performing the first event in response to a user operation comprises:
the electronic device receiving a first operation input by a user, the first operation instructing the electronic device to run the first application; and
in response to the first operation, the electronic device running the first application and displaying the first interface, the first interface comprising the two-dimensional code.
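The detect/compare/trigger flow of claim 1 can be sketched as follows. This is an illustrative sketch only: `LocationTag`, `matches_preset`, `on_first_position` and the matching rule (any shared Wi-Fi BSSID or cell identifier) are assumptions, not taken from the patent, which leaves the comparison unspecified.

```python
# Hedged sketch of claim 1's flow; all names and the matching rule
# are illustrative assumptions, not part of the claim.
from dataclasses import dataclass

@dataclass
class LocationTag:
    wifi: dict            # Wi-Fi access-point address (BSSID) -> signal strength
    cell_ids: dict        # cell identifier -> signal strength
    illumination: float   # illumination intensity (lux)

def matches_preset(tag: LocationTag, preset: LocationTag) -> bool:
    """Deliberately simple consistency test: the tag matches the preset
    location information when they share a Wi-Fi BSSID or a cell ID."""
    return bool(tag.wifi.keys() & preset.wifi.keys()) or \
           bool(tag.cell_ids.keys() & preset.cell_ids.keys())

def on_first_position(tag: LocationTag, preset: LocationTag) -> str:
    if matches_preset(tag, preset):
        # First action: run the first application and display the
        # first interface containing the two-dimensional code.
        return "show_qr_code"
    return "no_action"

preset = LocationTag(wifi={"aa:bb:cc:dd:ee:ff": -50},
                     cell_ids={"cell-1": -70}, illumination=120.0)
tag = LocationTag(wifi={"aa:bb:cc:dd:ee:ff": -55},
                  cell_ids={}, illumination=90.0)
print(on_first_position(tag, preset))   # -> show_qr_code
```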
2. The method of claim 1, wherein the electronic device performing the first event in response to a user operation further comprises:
after the electronic device displays the first interface, receiving a hold-and-flip operation of the user, wherein the hold-and-flip operation turns the display screen of the electronic device to face a code-scanning reader; or
after displaying the first interface, the electronic device displaying a second interface, the second interface indicating that the code scan is successful.
3. The method according to claim 1 or 2, wherein generating the location information corresponding to the preset position from the location tag collected by the electronic device when the electronic device, after entering the first area, performs the first event in response to a user operation comprises:
after the electronic device enters the first area, obtaining a confidence of the electronic device executing the first event, wherein the confidence represents the degree to which execution of the first event has been triggered by the user operation, the first event is associated with the first action, and the first application is run when the electronic device executes the first event;
when the confidence indicates that the first event is being executed or has been executed successfully, obtaining the location tag currently collected by the electronic device, wherein the location tag represents environment information collected by the electronic device at its current place;
the electronic device generating the location information corresponding to the preset position from the location tag, or the electronic device transmitting the location tag to a cloud server so that the cloud server generates the location information corresponding to the preset position from the location tag.
4. The method according to claim 1 or 2, wherein generating the location information corresponding to the preset position from the location tag collected by the electronic device when the electronic device, after entering the first area, performs the first event in response to a user operation comprises:
after detecting that the electronic device has entered the first area, obtaining a confidence of the electronic device executing the first event, wherein the confidence represents the degree to which execution of the first event has been triggered by the user operation, the first event is associated with the first action, and the first application is run when the electronic device executes the first event;
when the confidence indicates that execution of the first event has ended, obtaining an execution result of the first event, and verifying the execution result to obtain a verification result;
if the verification result indicates that the first event was executed successfully, obtaining the location tag currently collected by the electronic device, wherein the location tag represents environment information collected by the electronic device at its current place;
the electronic device generating the location information corresponding to the preset position from the location tag, or the electronic device transmitting the location tag to a cloud server so that the cloud server generates the location information corresponding to the preset position from the location tag.
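The recording path of claim 4 can be sketched as a small guard function. The function name, the `"scan_success"` result string and the 0.9 threshold are invented for illustration; the patent leaves these concrete values open.

```python
# Minimal sketch of the claim-4 recording path, under invented names
# and threshold values.
def maybe_record_preset(confidence: float, execution_result: str,
                        location_tag: dict, presets: list) -> bool:
    """Record the current location tag as preset location information
    only when the first event has finished and its result verifies."""
    EXEC_END_THRESHOLD = 0.9
    if confidence <= EXEC_END_THRESHOLD:
        return False              # execution not yet ended: record nothing
    # Verify the execution result (e.g. the display interface shows a
    # successful code scan) before trusting the collected tag.
    if execution_result != "scan_success":
        return False
    presets.append(location_tag)  # or transmit the tag to a cloud server
    return True

presets = []
print(maybe_record_preset(0.95, "scan_success", {"wifi": ["aa:bb"]}, presets))  # -> True
```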
5. The method according to claim 1 or 2, wherein before comparing the location tag with the location information corresponding to the preset position to obtain the comparison result, the method further comprises:
the electronic device obtaining the location information corresponding to the preset position.
6. The method of claim 4, wherein determining that the confidence indicates that execution of the first event has ended comprises:
if the confidence is greater than a first preset threshold and less than or equal to a second preset threshold, determining that the electronic device has started executing the first event; and if the confidence is greater than the second preset threshold, determining that the electronic device has finished executing the first event, wherein the first preset threshold is less than the second preset threshold.
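The two-threshold rule of claim 6 maps a confidence value onto an execution state; a sketch, with the concrete values 0.3 and 0.8 invented for illustration:

```python
# Sketch of the claim-6 two-threshold rule; the threshold values
# are assumptions, not taken from the patent.
def event_state(confidence: float,
                first_threshold: float = 0.3,
                second_threshold: float = 0.8) -> str:
    assert first_threshold < second_threshold
    if confidence > second_threshold:
        return "finished"      # device has executed and completed the event
    if confidence > first_threshold:
        return "started"       # device has started executing the event
    return "not_started"

print(event_state(0.9))   # -> finished
```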
7. The method of claim 4, wherein the execution result comprises the current display interface of the electronic device.
8. The method according to claim 1 or 2, wherein, if the location tag comprises the addresses of all the first Wi-Fi access points collected by the electronic device, the signal strength of each first Wi-Fi access point, the first cell identifier obtained by the electronic device, and the signal strength of the first cell identifier,
comparing the location tag with the location information corresponding to the preset position to obtain the comparison result comprises:
generating, based on the location tag, a first item set comprising the first Wi-Fi access points and the first cell identifier;
generating, based on the location information corresponding to the preset position, a second item set comprising the second Wi-Fi access points and the second cell identifier; and
calculating the comparison result between the location tag and the location information corresponding to the preset position from the first item set and the second item set.
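One plausible realization of the item-set calculation in claim 8 is a Jaccard similarity over the combined Wi-Fi and cell identifiers; the metric and the 0.5 cut-off are assumptions, since the claim leaves the calculation unspecified.

```python
# Hypothetical item-set comparison for claim 8: Jaccard similarity
# with an invented 0.5 consistency cut-off.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def compare(first_items: set, second_items: set, cutoff: float = 0.5) -> bool:
    """True when the location tag is deemed consistent with the preset."""
    return jaccard(first_items, second_items) >= cutoff

first = {"wifi:aa:bb", "wifi:cc:dd", "cell:4601"}
second = {"wifi:aa:bb", "wifi:cc:dd", "cell:4601", "wifi:ee:ff"}
print(compare(first, second))   # 3/4 = 0.75 >= 0.5 -> True
```

Since the claim also collects signal strengths, a weighted variant could scale each shared identifier by its strength; the claim only requires some comparison result computed from the two item sets.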
9. The method according to claim 1 or 2, wherein, if the location tag comprises the first cell identifier obtained by the electronic device and the signal strength of the first cell identifier,
comparing the location tag with the location information corresponding to the preset position to obtain the comparison result comprises:
comparing the first cell identifier with the second cell identifier to obtain the comparison result, wherein if the first cell identifier is the same as the second cell identifier, the comparison result indicates that the location tag is consistent with the preset location information, and if the first cell identifier is different from the second cell identifier, the comparison result indicates that the location tag is not consistent with the preset location information.
10. The method according to claim 1 or 2, wherein the two-dimensional code is a subway ride code, a health-status two-dimensional code, a membership-card two-dimensional code, or a coupon two-dimensional code.
11. The method of claim 3, wherein obtaining the confidence of the electronic device executing the first event comprises:
the electronic device running the first application in response to the first operation, and capturing the first interface after the first interface is displayed;
the electronic device recognizing the first interface to obtain a first recognition result;
the electronic device determining the confidence of the first event according to the first recognition result; and
at intervals of a first preset duration, the electronic device capturing the current display interface and determining the confidence of the first event according to the current display interface.
12. The method of claim 3, wherein obtaining the confidence of the electronic device executing the first event comprises:
after the electronic device receives the hold-and-flip operation of the user, the electronic device capturing the current display interface after a second preset duration;
the electronic device recognizing the current display interface to obtain a second recognition result; and
the electronic device determining the confidence of the first event according to the second recognition result.
13. An electronic device, comprising:
one or more processors; a memory storing code; and a touch screen for detecting touch operations and displaying interfaces;
wherein the code, when executed by the one or more processors, causes the electronic device to perform the scene recognition method of any one of claims 1-12.
CN202280004912.9A 2021-04-23 2022-02-08 Scene recognition method and electronic equipment Active CN115668989B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
CN2021104445663 2021-04-23
CN202110444566.3A CN113115211A (en) 2021-04-23 2021-04-23 Multilayer fence construction method, cloud server and first terminal device
CN2021108920613 2021-08-04
CN202110892061 2021-08-04
PCT/CN2022/075559 WO2022222576A1 (en) 2021-04-23 2022-02-08 Scenario recognition method and electronic device

Publications (2)

Publication Number Publication Date
CN115668989A CN115668989A (en) 2023-01-31
CN115668989B true CN115668989B (en) 2024-04-02

Family

ID=83723569

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202280004912.9A Active CN115668989B (en) 2021-04-23 2022-02-08 Scene recognition method and electronic equipment

Country Status (2)

Country Link
CN (1) CN115668989B (en)
WO (1) WO2022222576A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115859158B (en) * 2023-02-16 2023-07-07 荣耀终端有限公司 Scene recognition method, system and terminal equipment

Citations (6)

Publication number Priority date Publication date Assignee Title
CN109309911A (en) * 2018-11-23 2019-02-05 深圳市万通顺达科技股份有限公司 Two dimensional code call-out method based on bluetooth, device, payment system
CN110278230A (en) * 2018-03-16 2019-09-24 阿里巴巴集团控股有限公司 Data processing method, client, server and storage medium
WO2020000697A1 (en) * 2018-06-29 2020-01-02 平安科技(深圳)有限公司 Behavior recognition method and apparatus, computer device, and storage medium
WO2020011211A1 (en) * 2018-07-13 2020-01-16 奇酷互联网络科技(深圳)有限公司 Mobile terminal and method and device for automatically logging into application platform
WO2021072775A1 (en) * 2019-10-18 2021-04-22 深圳市欢太科技有限公司 Method and apparatus for terminal payment, terminal device and computer-readable storage medium
CN113115211A (en) * 2021-04-23 2021-07-13 荣耀终端有限公司 Multilayer fence construction method, cloud server and first terminal device

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
US7556193B1 (en) * 2008-06-30 2009-07-07 International Business Machines Corporation Method and apparatus for affinity card consolidation
KR101919776B1 (en) * 2011-12-19 2018-11-19 엘지전자 주식회사 Mobile terminal and method for controlling thereof
CN103365543B (en) * 2013-07-08 2016-08-17 宇龙计算机通信科技(深圳)有限公司 Terminal and two-dimensional code display method
WO2019153211A1 (en) * 2018-02-08 2019-08-15 华为技术有限公司 Application switching method, and terminal
CN109214810A (en) * 2018-07-25 2019-01-15 努比亚技术有限公司 A kind of two-dimensional code display method, mobile terminal and computer readable storage medium
CN109711226A * 2018-12-25 2019-05-03 努比亚技术有限公司 Two-dimensional code identification method and apparatus, mobile terminal, and readable storage medium
WO2020148658A2 (en) * 2019-01-18 2020-07-23 Rathod Yogesh Methods and systems for displaying on map current or nearest and nearby or searched and selected location(s), geo-fence(s), place(s) and user(s) and identifying associated payments and account information for enabling to make and receive payments
CN114153343B (en) * 2021-10-22 2022-09-16 荣耀终端有限公司 Health code display method and electronic equipment

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
CN110278230A (en) * 2018-03-16 2019-09-24 阿里巴巴集团控股有限公司 Data processing method, client, server and storage medium
WO2020000697A1 (en) * 2018-06-29 2020-01-02 平安科技(深圳)有限公司 Behavior recognition method and apparatus, computer device, and storage medium
WO2020011211A1 (en) * 2018-07-13 2020-01-16 奇酷互联网络科技(深圳)有限公司 Mobile terminal and method and device for automatically logging into application platform
CN109309911A (en) * 2018-11-23 2019-02-05 深圳市万通顺达科技股份有限公司 Two dimensional code call-out method based on bluetooth, device, payment system
WO2021072775A1 (en) * 2019-10-18 2021-04-22 深圳市欢太科技有限公司 Method and apparatus for terminal payment, terminal device and computer-readable storage medium
CN113115211A (en) * 2021-04-23 2021-07-13 荣耀终端有限公司 Multilayer fence construction method, cloud server and first terminal device

Also Published As

Publication number Publication date
WO2022222576A1 (en) 2022-10-27
CN115668989A (en) 2023-01-31

Similar Documents

Publication Publication Date Title
US7571124B2 (en) Location based services virtual bookmarking
US10445778B2 (en) Short distance user recognition system, and method for providing information using same
US20150081583A1 (en) Confirming delivery location using radio fingerprinting
CN105510908B (en) Positioning method, device and system based on wireless communication
US20100198725A1 (en) Method for securing transactions, transaction device, bank server, mobile terminal, and corresponding computer programs
CN109656973B (en) Target object association analysis method and device
KR101453317B1 (en) Method and system for service based on location client using WiFi
US20140302819A1 (en) Techniques for selecting a proximity card of a mobile device for access
US20230098616A1 (en) Method for Invoking NFC Application, Electronic Device, and NFC Apparatus
CN111241856A (en) Method for selecting NFC analog card and watch
CN111464692A (en) Near field communication card determination method and device, storage medium and electronic equipment
WO2014048268A1 (en) System and method of monitoring for out-of-bounds mobile pos terminals
US20220058656A1 (en) Identity recognition method and apparatus based on dynamic rasterization management, and server
CN110945552B (en) Product sales reporting method, payment method and terminal equipment
CN203825670U (en) Fingerprint-card punching-photographing integrated attendance apparatus
CN104580325A (en) User pairing method and device, as well as data exchange method, device and system
CN115668989B (en) Scene recognition method and electronic equipment
CN112468975A (en) Management method, device, medium and electronic equipment of analog card
CN110910524A (en) Automatic sign-in system, method, device, electronic equipment and computer storage medium
CN102857624B (en) Notification method, notification device and source electronic equipment
CN112492518B (en) Card determination method, device, electronic equipment and storage medium
TWM599429U (en) Geographic Information System Combining Consumption Heat
CN111524041A (en) Ordering processing method, device and system based on dynamic content service
CN110942542A (en) Access control machine and control method and control equipment thereof
CN210721606U (en) Identity verification device and entrance guard machine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant