CN117596546A - Scene determination method and device, terminal equipment and storage medium - Google Patents

Scene determination method and device, terminal equipment and storage medium

Info

Publication number
CN117596546A
CN117596546A (application number CN202311467697.9A)
Authority
CN
China
Prior art keywords
fingerprint information
scene
preset
target
subspace
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311467697.9A
Other languages
Chinese (zh)
Inventor
袁正
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202311467697.9A
Publication of CN117596546A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W4/00: Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02: Services making use of location information
    • H04W4/021: Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/23: Clustering techniques
    • G06F18/232: Non-hierarchical techniques
    • G06F18/2321: Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W4/00: Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/50: Service provisioning or reconfiguring
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W64/00: Locating users or terminals or network equipment for network management purposes, e.g. mobility management
    • H04W64/006: Locating users or terminals or network equipment for network management purposes, e.g. mobility management, with additional information processing, e.g. for direction or speed determination
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Telephone Function (AREA)

Abstract

The invention discloses a scene determination method and device, a terminal device, and a storage medium. The method includes: acquiring current fingerprint information, where the current fingerprint information is collected from the wireless network at the current position; and, when the current fingerprint information matches preset fingerprint information corresponding to a preset device scene, determining that the target terminal device is in the preset device scene, where the preset device scene is a scene of leaving a target area. The method can determine that the user has left the target area from the fingerprint information at the current position of the user's terminal device, providing the user with a convenient, fast, and imperceptible leave-scene sensing effect and supplying input for subsequent recommendation applications.

Description

Scene determination method and device, terminal equipment and storage medium
Technical Field
Embodiments of the present application relate to the technical field of wireless positioning, and in particular to a scene determination method and device, a terminal device, and a storage medium.
Background
In the prior art, the technical scheme for determining that a user is in a scene of leaving a target area, such as a scene of leaving home, is to decide that the user is leaving home when the user's mobile phone is automatically disconnected from the home WiFi network.
The problem with the prior art is that this determination is slow, so the leave-target-area scene is recognized too late to provide services for follow-up recommendation applications.
Disclosure of Invention
In view of this, the scene determination method and device, terminal device, and storage medium provided in the embodiments of the present application can determine that the user has left the target area from the fingerprint information at the user's current position, providing the user with a convenient, fast, and imperceptible leave-scene sensing effect and supplying input for subsequent recommendation applications.
In a first aspect, an embodiment of the present application provides a scene determination method applied to a terminal device, including:
acquiring current fingerprint information, where the current fingerprint information is collected from the wireless network at the current position; and
when the current fingerprint information matches preset fingerprint information corresponding to a preset device scene, determining that the target terminal device is in the preset device scene, where the preset device scene is a scene of leaving a target area.
In a second aspect, an embodiment of the present application provides a scene determination device applied to a terminal device, including:
a fingerprint acquisition module, configured to acquire current fingerprint information, where the current fingerprint information is collected from the wireless network at the current position; and
a scene determination module, configured to determine that the target terminal device is in a preset device scene when the current fingerprint information matches preset fingerprint information corresponding to the preset device scene, where the preset device scene is a scene of leaving a target area.
In a third aspect, an embodiment of the present application provides a terminal device including a memory and a processor, where the memory stores a computer program runnable on the processor, and the processor, when executing the program, implements the steps of the scene determination method provided in the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having stored thereon a computer program that, when executed by a processor, implements the steps of the scene determination method provided in the first aspect of embodiments of the present application.
The scene determination method and device, terminal device, and computer-readable storage medium provided in the embodiments of the present application can determine that the user has left the target area from the fingerprint information at the user's current position, providing a convenient, fast, and imperceptible leave-scene sensing effect and supplying input for subsequent recommendation applications, thereby solving the technical problems described in the background.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and, together with the description, serve to explain the technical aspects of the application.
Fig. 1 is a schematic diagram of a scene determining system according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a scene determining method according to an embodiment of the present application;
fig. 3 is a flow chart of another scenario determination method according to an embodiment of the present application;
fig. 4 is a flowchart of a method for obtaining preset fingerprint information according to an embodiment of the present application;
fig. 5 is a flowchart of another method for obtaining preset fingerprint information according to an embodiment of the present application;
fig. 6 is a flowchart of another method for obtaining preset fingerprint information according to an embodiment of the present application;
fig. 7 is a schematic diagram of an away-home scenario provided in an embodiment of the present application;
fig. 8 is a schematic flow chart of a subspace clustering method of an away-from-home scene according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a scene determining device according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
To make the purposes, technical solutions, and advantages of the embodiments of the present application clearer, the specific technical solutions of the application are described in further detail below with reference to the accompanying drawings. The following examples illustrate the application but are not intended to limit its scope.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
It should be noted that the terms "first/second/third" in the embodiments of the present application merely distinguish similar or different objects and do not imply any particular ordering of those objects; where permitted, "first/second/third" may be interchanged in a specific order or sequence so that the embodiments described herein can be implemented in an order other than that illustrated or described.
In the prior art, the technical scheme for determining that a user is in a scene of leaving a target area, such as a scene of leaving home, is to decide that the user is leaving home when the user's mobile phone is automatically disconnected from the home WiFi network.
Fig. 1 is a schematic diagram of a scene determining system according to an embodiment of the present application. As shown in fig. 1, the scene determination system includes a wireless access point 20 deployed within a target area and a terminal device 10. When the terminal device 10 breaks its wireless (i.e., WiFi) connection with the wireless access point 20, that is, leaves the network range of the wireless access point 20 (shown by the dotted line in the figure), the user is determined to be in a scene of leaving the target area. There may be one or more wireless access points 20 and one or more terminal devices 10, and the wireless access point may be a wireless device such as a router.
The problem with the prior art is that this determination is slow, so the leave-target-area scene is recognized too late to provide services for follow-up recommendation applications.
In view of this, an embodiment of the present application provides a scene determination method: current fingerprint information is acquired, and when the current fingerprint information matches preset fingerprint information corresponding to a preset device scene, the target terminal device is determined to be in the preset device scene, where the preset device scene is a scene of leaving a target area. The method can determine that the user has left the target area from the fingerprint information at the user's current position, providing a convenient, fast, and imperceptible leave-scene sensing effect and supplying input for subsequent recommendation applications.
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
Fig. 2 is a flow chart of a scene determining method according to an embodiment of the present application. The method may be applied to a terminal device, which in practice may be any of various types of devices with information processing capability. For example, the terminal device may be a personal computer, a notebook computer, a palmtop computer, a server, or the like; it may also be a mobile terminal such as a mobile phone, a vehicle-mounted computer, a tablet computer, or a projector. As shown in fig. 2, the method may include the following steps 101 to 102:
step 101: and acquiring current fingerprint information, wherein the current fingerprint information is acquired according to a wireless network of the current position.
It should be noted that the current fingerprint information is the location fingerprint corresponding to the current position, i.e., positioning information obtained from the wireless network at the current position. The embodiments of the present application limit neither the way the current fingerprint information is acquired nor the information it contains.
The current fingerprint information may be collected periodically at a preset time interval, or in response to a trigger configured by the user, etc.; the embodiments of the present application do not limit this either.
It will be appreciated that a "location fingerprint" associates each location in the physical environment with some kind of "fingerprint", one unique fingerprint per location. The fingerprint may be single-dimensional or multi-dimensional and of various types: any location-dependent feature, i.e., one that helps distinguish locations, can serve as a location fingerprint. Examples include the multipath structure of the communication signal at a location, whether a given access point or base station is detectable there, the received signal strength (RSS) of signals from base stations, and the round-trip time or delay of signals when communicating at that location; any of these, or a combination of them, can be used as a location fingerprint.
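As a concrete illustration (not part of the patent), a WiFi location fingerprint is often represented as a mapping from access-point identifiers (BSSIDs) to received signal strength; the sketch below assumes that representation, and every name in it is hypothetical:

```python
from dataclasses import dataclass, field
from typing import Dict, Set


@dataclass
class LocationFingerprint:
    """One location fingerprint: BSSID -> RSS (dBm) observed at a moment."""
    timestamp: float
    rss_by_bssid: Dict[str, float] = field(default_factory=dict)

    def common_aps(self, other: "LocationFingerprint") -> Set[str]:
        """Access points observed in both fingerprints."""
        return self.rss_by_bssid.keys() & other.rss_by_bssid.keys()


fp = LocationFingerprint(timestamp=0.0,
                         rss_by_bssid={"aa:bb:cc:dd:ee:01": -45.0,
                                       "aa:bb:cc:dd:ee:02": -70.0})
```

A richer fingerprint could carry round-trip time or multipath features as extra fields, matching the feature list above.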
Step 102: when the current fingerprint information matches preset fingerprint information corresponding to a preset device scene, determining that the target terminal device is in the preset device scene, where the preset device scene is a scene of leaving a target area.
It should be noted that the target terminal device stores preset fingerprint information corresponding to the preset device scene. After the current fingerprint information is obtained, it is matched against the preset fingerprint information; if they match, it can be determined that the target terminal device is in the preset device scene, that is, in a scene of leaving the target area.
According to the scene determination method provided in the embodiments of the present application, when the user leaves a target area such as home, the user's terminal device obtains the current fingerprint information and compares it with the preset fingerprint information; if they match, it can be determined that the device is in the preset device scene, i.e., the scene of leaving the target area. The method can thus provide a convenient (no extra operation), fast (responsive), and imperceptible (unnoticed by the user) leave-scene sensing effect, and supply input for subsequent recommendations.
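The patent does not fix a concrete matching rule for step 102; a minimal sketch of one plausible matcher, using cosine similarity over the union of observed access points (the threshold and the missing-AP penalty are assumptions), might look like:

```python
import math


def fingerprints_match(current, preset, threshold=0.95, missing_rss=-100.0):
    """Hypothetical matcher: cosine similarity between the RSS vectors of two
    BSSID->RSS dicts over the union of their access points; an AP unseen in
    one fingerprint is treated as a very weak signal (missing_rss dBm)."""
    aps = sorted(current.keys() | preset.keys())
    a = [current.get(ap, missing_rss) for ap in aps]
    b = [preset.get(ap, missing_rss) for ap in aps]
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) >= threshold
```

Many other rules (Euclidean distance, shared-AP counting) would fit the same step; cosine is chosen here only because it tolerates an overall shift in signal strength.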
In some embodiments, the preset fingerprint information may be scene fingerprint information, i.e., fingerprint information corresponding to a location outside the target area.
In this embodiment of the present application, the preset fingerprint information includes scene fingerprint information, which may be fingerprint information collected by any terminal device in a departure area located a preset distance from the target area, where the any terminal device is the target terminal device or another terminal device.
It should be noted that the preset fingerprint information may be fingerprint information collected by the target terminal device or another terminal device in the departure area, i.e., the area a preset distance from the target area. A user (terminal device) in the departure area has, by definition, left the target area, so fingerprint information obtained there is position information outside the target area; the scene fingerprint information can therefore be used to accurately determine the scene of leaving the target area.
In some embodiments, the preset fingerprint information may be the fingerprint information of a scene subspace, where the scene subspace is a wireless-network subspace whose fingerprint information indicates a wireless network region outside the target area.
In this embodiment of the present application, the preset fingerprint information includes the fingerprint information of a scene subspace, where the scene subspace is generated by clustering fingerprint information collected by at least one terminal device in the departure area, and the fingerprint information of the scene subspace matches the scene fingerprint information.
It should be noted that the scene subspace may be generated by clustering fingerprint information collected by at least one terminal device in the departure area; because the fingerprint information of the scene subspace matches the scene fingerprint information, the scene subspace in effect contains the position corresponding to the scene fingerprint information.
It can be understood that the area covered by the fingerprint information of the scene subspace is larger than the area covered by the scene fingerprint information, so using the fingerprint information of the scene subspace to determine the scene of leaving the target area can improve the accuracy of the determination.
In some embodiments, before the current fingerprint information is acquired, the preset fingerprint information may be acquired from the specific wireless network corresponding to the target area, so as to improve the accuracy of the determination.
Fig. 3 is a flow chart of another scenario determination method according to an embodiment of the present application. As shown in fig. 3, before the step 101 of acquiring the current fingerprint information, the method further includes:
step 201: and acquiring the preset fingerprint information by acquiring the fingerprint information of a target wireless network corresponding to the target area, wherein the target wireless network comprises a wireless fidelity network.
It should be noted that, by collecting fingerprint information of a target wireless network corresponding to a target area, the target wireless network includes a WiFi, and preset fingerprint information can be obtained. Because the range of the WiFi network can cover the target area, fingerprint information is generated through the WiFi network, and the judgment accuracy can be further improved.
In some embodiments, when the preset fingerprint information includes scene fingerprint information, the preset fingerprint information may be acquired in two ways: one in the scene of leaving the target area, and the other in the scene of entering the target area. Both are described in detail below.
Mode one: acquiring the preset fingerprint information in the scene of leaving the target area.
Fig. 4 is a flowchart of a method for obtaining preset fingerprint information according to an embodiment of the present application. As shown in fig. 4, when the preset fingerprint information is the scene fingerprint information, acquiring the preset fingerprint information by collecting the fingerprint information of the target wireless network corresponding to the target area may include:
Step 301: determining that any terminal device has switched from a connected state to a disconnected state with respect to the target wireless network.
Any terminal device can monitor its connection state with the target wireless network to determine whether it has switched to the disconnected state. There are many ways to monitor the connected/disconnected state; the embodiments of the present application do not limit how the switch from the connected state to the disconnected state is determined.
Step 302: acquiring the fingerprint information collected by the any terminal device in a first target period before the disconnection from the target wireless network, to obtain the scene fingerprint information, where the end time of the first target period is the time at which the any terminal device disconnected, and the any terminal device is the target terminal device or another terminal device.
After it is determined that any terminal device has disconnected from the target wireless network, the fingerprint information collected by that device in the first target period before the disconnection time can be acquired and used as the scene fingerprint information.
For example, if the time at which the any terminal device disconnected from the target wireless network is determined to be 14 min 50 s, the fingerprint information of the previous 10 s is acquired, i.e., the fingerprint information from 14 min 40 s to 14 min 50 s, and is taken as the scene fingerprint information corresponding to the scene of leaving the target area.
In addition, the preset fingerprint information may be collected by any terminal device and then sent to the target terminal device, or may be collected by the target terminal device itself; thus, the any terminal device may be the target terminal device or another terminal device.
Further, to ensure that the current scene really is the scene of leaving the target area before the scene fingerprint information is collected, the device may observe for a period of time after the disconnection from the target wireless network is detected and only then collect the fingerprint information.
In this embodiment, before step 302 of acquiring the fingerprint information collected by the any terminal device in the first target period before the disconnection from the target wireless network to obtain the scene fingerprint information, the method may further include: determining that the any terminal device has not reconnected to the target wireless network within a preset period after the disconnection.
It should be noted that if, after disconnecting from the target wireless network, any terminal device establishes no new connection to it within the preset period, the disconnection is neither accidental nor a deliberate one-off, and it can be concluded that the device disconnected because it left the target area. Collecting the corresponding fingerprint information at this point improves the accuracy of the collected scene fingerprint information, and hence the accuracy of determining that the user has left.
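Steps 301-302 and the reconnection (debounce) check can be sketched together as below; the 10 s window and 60 s debounce, like every name here, are illustrative assumptions rather than values fixed by the patent:

```python
import collections


class SceneFingerprintCollector:
    """Keep a rolling buffer of (timestamp, fingerprint) samples; when the
    target WiFi disconnects and is not rejoined within debounce_s seconds,
    the samples from the last window_s seconds before the disconnect become
    the scene fingerprint information."""

    def __init__(self, window_s=10.0, debounce_s=60.0):
        self.window_s = window_s
        self.debounce_s = debounce_s
        self.buffer = collections.deque()  # (timestamp, fingerprint) pairs

    def add_sample(self, ts, fingerprint):
        self.buffer.append((ts, fingerprint))
        # drop samples far too old to fall inside any trailing window
        while self.buffer and ts - self.buffer[0][0] > 10 * self.window_s:
            self.buffer.popleft()

    def on_disconnect(self, disconnect_ts, reconnected_within_debounce):
        """Called once the debounce period has elapsed after a disconnect."""
        if reconnected_within_debounce:
            return None  # accidental drop, not a leave event
        start = disconnect_ts - self.window_s
        return [fp for ts, fp in self.buffer if start <= ts <= disconnect_ts]
```

The key design point is that sampling runs continuously while connected, so the trailing window is already in memory at the moment the disconnect is observed.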
Mode two: acquiring the preset fingerprint information in the scene of entering the target area.
Fig. 5 is a flowchart of another method for obtaining preset fingerprint information according to an embodiment of the present application. As shown in fig. 5, when the preset fingerprint information is the scene fingerprint information, acquiring the preset fingerprint information by collecting the fingerprint information of the target wireless network corresponding to the target area may include:
Step 401: determining that any terminal device has switched from a disconnected state to a connected state with respect to the target wireless network.
Any terminal device can monitor its disconnected state with the target wireless network to determine whether it has switched to the connected state. There are many ways to monitor the connected/disconnected state; the embodiments of the present application do not limit how the switch from the disconnected state to the connected state is determined.
Step 402: acquiring the fingerprint information collected by the any terminal device in a second target period after the connection with the target wireless network is established, to obtain the scene fingerprint information, where the start time of the second target period is the time at which the any terminal device established the connection, and the any terminal device is the target terminal device or another terminal device.
After it is determined that any terminal device has established a connection with the target wireless network, the fingerprint information collected by that device in the second target period after the connection time can be acquired and used as the scene fingerprint information.
For example, if the time at which the any terminal device established the connection with the target wireless network is determined to be 14 min 50 s, the fingerprint information of the following 10 s is acquired, i.e., the fingerprint information from 14 min 50 s to 15 min 0 s, and is taken as the scene fingerprint information corresponding to the scene of leaving the target area.
In addition, the preset fingerprint information may be collected by any terminal device and then sent to the target terminal device, or may be collected by the target terminal device itself; thus, the any terminal device may be the target terminal device or another terminal device.
It can be understood that the fingerprint information collected just after any terminal device connects to the target wireless network represents the user entering the target area scene, which is equivalent to the fingerprint information just before leaving the target area. Acquiring the fingerprint information of this period therefore improves the accuracy of the collected scene fingerprint information, and hence the accuracy of determining that the user has left.
In some embodiments, when the preset fingerprint information includes the fingerprint information corresponding to a scene subspace, the preset fingerprint information may be acquired by collecting fingerprint information that meets a signal strength requirement and generating a subspace from it.
Fig. 6 is a flowchart of another method for obtaining preset fingerprint information according to an embodiment of the present application. As shown in fig. 6, acquiring the preset fingerprint information by collecting the fingerprint information of the target wireless network of the target area may include:
Step 501: collecting, by any terminal device, fingerprint information within a preset signal strength range in the target wireless network, to obtain a plurality of pieces of initial fingerprint information, where the preset signal strength range is determined according to scene fingerprint information, and the scene fingerprint information is fingerprint information indicating that the any terminal device is in the preset device scene.
It should be noted that a plurality of pieces of initial fingerprint information can be obtained by collecting fingerprint information within the preset signal strength range in the target wireless network. The preset signal strength range is determined from the scene fingerprint information, which indicates that the any terminal device is in the preset device scene. Because the device is about to leave the target wireless network, the signal strength corresponding to the scene fingerprint information is weak; determining a suitable signal strength range from it allows the initial fingerprint information to be collected accurately.
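Step 501's weak-signal filtering can be sketched as follows (the function and parameter names are hypothetical; in practice the range would be derived from the RSS values seen in the scene fingerprint information):

```python
def collect_initial_fingerprints(samples, rss_min, rss_max, target_bssid):
    """Keep only the samples (BSSID->RSS dicts) whose RSS for the target
    network lies inside the preset weak-signal range [rss_min, rss_max] dBm."""
    return [s for s in samples
            if target_bssid in s and rss_min <= s[target_bssid] <= rss_max]
```

Samples that do not see the target network at all are discarded along with strong-signal ones, since both belong to positions other than the boundary of the coverage area.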
Step 502: clustering the plurality of pieces of initial fingerprint information to obtain at least one initial subspace.
There are many ways to cluster the plurality of pieces of initial fingerprint information; for example, the DBSCAN clustering method may be used. The embodiments of the present application do not limit the clustering method used to obtain the at least one initial subspace. Each obtained initial subspace belongs to the weak-signal subspaces.
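The text names DBSCAN as one possible clustering method; the following is a minimal, self-contained DBSCAN over fixed-length RSS vectors, an O(n²) teaching sketch rather than a production implementation, with all parameter values chosen for illustration:

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns one cluster label per point, with -1 meaning
    noise. Each point is an equal-length RSS vector (list or tuple)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def region(i):
        return [j for j in range(len(points)) if dist(points[i], points[j]) <= eps]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neighbors = region(i)
        if len(neighbors) < min_pts:   # not a core point: provisionally noise
            labels[i] = -1
            continue
        cluster += 1
        labels[i] = cluster
        seeds = [j for j in neighbors if j != i]
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:        # noise reached from a core point
                labels[j] = cluster    # becomes a border point, no expansion
                continue
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nbrs = region(j)
            if len(nbrs) >= min_pts:   # j is a core point: keep expanding
                seeds.extend(k for k in nbrs
                             if labels[k] is None or labels[k] == -1)
    return labels
```

Each resulting cluster of weak-signal fingerprints would correspond to one initial subspace; the -1 points are isolated samples that belong to no subspace.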
Step 503: determine a scene subspace according to the similarity between the fingerprint information of the at least one initial subspace and the scene fingerprint information, where the similarity between the fingerprint information of the scene subspace and the scene fingerprint information is greater than or equal to a similarity threshold.
It should be noted that, since the scene fingerprint information indicates that the arbitrary terminal device is in the preset device scene, the similarity between each initial subspace and the scene fingerprint information may be obtained, and any subspace whose similarity is greater than or equal to the similarity threshold may be determined as a scene subspace. In this way, the scene subspace can be identified among the plurality of weak-signal subspaces, improving the accuracy of determining the scene subspace.
The method for determining the similarity between the fingerprint information of the at least one initial subspace and the scene fingerprint information is not limited in this embodiment.
In this embodiment, determining the scene subspace in step 503 according to the similarity between the fingerprint information of the at least one initial subspace and the scene fingerprint information may include: performing similarity calculation between each piece of fingerprint information of each initial subspace and the scene fingerprint information, to obtain a plurality of similarities corresponding to each initial subspace; and if the proportion of target similarities among the plurality of similarities corresponding to an initial subspace is greater than a preset proportion, determining that initial subspace as the scene subspace, where a target similarity is a similarity greater than or equal to the similarity threshold.
For example, the initial subspaces (weak-signal subspaces) obtained by clustering can be labeled against the scene fingerprint information (the away-from-home scene tag): pairwise similarity calculation is performed between the fingerprints of each initial subspace and the tag fingerprints (using a similarity algorithm such as the Pearson correlation coefficient, cosine similarity, or Euclidean distance). If, in the two-dimensional array obtained by the pairwise calculation between an initial subspace and each tag, more than 68% of the similarity values indicate positive correlation, that initial subspace is marked as a scene subspace.
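The labeling rule above can be sketched as follows, using the Pearson correlation coefficient as the similarity algorithm and the 68% figure from the example as the proportion threshold. The function names and sample fingerprint vectors are illustrative assumptions.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length RSSI vectors.
    (Assumes neither vector is constant, which would give zero variance.)"""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def is_scene_subspace(subspace_fps, tag_fps, min_ratio=0.68):
    """Mark a subspace as a scene subspace when more than min_ratio of the
    pairwise similarities against the tag fingerprints are positive."""
    sims = [pearson(f, t) for f in subspace_fps for t in tag_fps]
    positive = sum(1 for s in sims if s > 0)
    return positive / len(sims) > min_ratio

tag_fps = [(-68, -80, -85)]                        # away-from-home scene tag
subspace_fps = [(-70, -79, -84), (-69, -81, -86)]  # clustered weak-signal scans
```

Here `is_scene_subspace(subspace_fps, tag_fps)` returns `True` because every pairwise correlation with the tag is positive, whereas a subspace whose fingerprints vary oppositely to the tag would fall below the threshold.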
An exemplary application of the embodiments of the present application in a practical application scenario will be described below.
Fig. 7 is a schematic diagram of an away-from-home scenario provided in an embodiment of the present application. The home scene includes an entrance/exit with a departure path outside it; an away-from-home area lies on the departure path at a certain distance from the entrance, and the position of the away-from-home area farthest from the target area is determined according to the time t at which the terminal device automatically disconnects from the home WiFi. The home scene contains a plurality of subspaces obtained by clustering, of which only the subspace corresponding to the away-from-home area is the away-from-home subspace and can be used to determine the user's away-from-home scene. In the scheme adopted by the embodiment of the present application, scene fingerprint information is first obtained, a plurality of subspaces are obtained through clustering, and the away-from-home subspace (i.e., the away-from-home area in the figure) is determined according to the scene fingerprint information and the subspaces. If the user's terminal device is located in the away-from-home subspace, it is determined that the user is in the away-from-home scene.
Fig. 8 is a flowchart of a subspace clustering method of an away-from-home scene according to an embodiment of the present application. Based on the schematic diagram of the away-from-home scenario shown in fig. 7, as shown in fig. 8, the method includes the following steps 601 to 612:
Step 601: when the user leaves home, the smartphone automatically disconnects from the home WiFi; the time t of disconnection is recorded, and if the smartphone does not reconnect to the WiFi within 10 minutes, it is determined that the user has left home. Using the smartphone's WiFi disconnection time as the leaving criterion can be extended to other devices such as watches, tablets, and notebook computers.
Step 602: query for historical WiFi disconnect and out-of-fence events.
Step 603: determine whether the WiFi disconnection event and the geofence-exit event are consecutive, with a time difference of no more than 2 minutes. The home scene is provided with both an exit geofence and WiFi; combining the two makes the determination that the user has left home more accurate.
Step 604: trace back the fingerprint data of weak signal strength (below -67 dBm) collected during a period t (e.g., 10 seconds) before disconnection.
Step 605: treat the historical fingerprints within those 10 s as outdoor single weak-WiFi fingerprints, and use them as the tag of the away-from-home scene.
Step 606: and judging that the user enters home. For example, may be determined by the user's smartphone in connection with WiFi in the home.
Step 607: when the WiFi signal strength on the smartphone drops to a weak level (for example, below -67 dBm), trigger data collection: collect the fingerprint information of the current WiFi and cell, and store it.
Step 608: accumulate fingerprint data for one week, or until 1000 fingerprint records have been collected.
Step 609: cluster the weak-signal fingerprints collected from the user using a clustering algorithm (for example, the DBSCAN clustering algorithm).
Step 610: and generating a weak signal subspace according to the clustering result.
Step 611: label the clustered weak-signal subspaces against the tags of the away-from-home scene, performing pairwise similarity calculation between the fingerprints of each subspace and the tags (for example, using a similarity algorithm such as the Pearson correlation coefficient, cosine similarity, or Euclidean distance); if more than 68% of the values in the two-dimensional array obtained by the pairwise calculation between a subspace and the tags indicate positive correlation, mark that subspace as an away-from-home scene subspace.
Step 612: both the away-from-home scene subspace and the away-from-home scene tag are stored in a subspace database.
The away-from-home scene subspace and the away-from-home scene tag are obtained through the above steps. In a subsequent application, when the user makes a positioning request, if the current position falls within the away-from-home scene subspace or matches the away-from-home scene tag, it is determined that the user has entered the away-from-home scene.
According to this embodiment, when the user leaves home and the smartphone automatically disconnects from WiFi, the previously collected weak-signal fingerprint information is stored and used as the away-from-home scene tag; the stored in-home weak-signal fingerprint data are clustered into subspaces; the tag is used to label the clustered subspaces to obtain the user's away-from-home subspace; and when the user appears in the away-from-home subspace again, it is determined that the user has entered the away-from-home scene, thereby providing a basis for subsequent recommendation services.
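The online lookup described above (matching a positioning request against the stored subspace database) might be sketched as a simple nearest-fingerprint check. The distance metric, the `max_dist` threshold, and the stored values are assumptions for the sketch; the embodiment itself does not fix a particular matching rule.

```python
from math import dist  # Euclidean distance, Python 3.8+

def in_away_scene(current_fp, subspace_db, max_dist=5.0):
    """Return True when the current fingerprint lies close to any stored
    away-from-home fingerprint (scene-subspace member or scene tag)."""
    return any(dist(current_fp, stored) <= max_dist for stored in subspace_db)

# Stored away-from-home subspace fingerprints (RSSI vectors, dBm).
subspace_db = [(-70.0, -79.0, -84.0), (-69.0, -81.0, -86.0)]

in_away_scene((-71.0, -80.0, -85.0), subspace_db)   # near the exit -> True
in_away_scene((-40.0, -50.0, -60.0), subspace_db)   # deep indoors -> False
```

A positive match would then trigger the downstream recommendation services mentioned in the embodiment.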
It should be understood that, although the steps in the above-described flowcharts are shown in order as indicated by the arrows, these steps are not necessarily performed in order as indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts may include a plurality of sub-steps or stages that are not necessarily performed at the same time, but may be performed at different times, and the order of execution of the sub-steps or stages is not necessarily sequential, but may be performed in rotation or alternately with at least a portion of the sub-steps or stages of other steps or steps.
Based on the foregoing embodiments, the embodiments of the present application provide a scene determination device, where each module included in the device and each unit included in each module may be implemented by a processor; of course, the method can also be realized by a specific logic circuit; in an implementation, the processor may be a Central Processing Unit (CPU), a Microprocessor (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
Fig. 9 is a schematic structural diagram of a scene determining device according to an embodiment of the present application. As shown in fig. 9, the apparatus 700 includes a fingerprint acquisition module 701 and a scene determination module 702, wherein:
a fingerprint acquisition module 701, configured to acquire current fingerprint information, where the current fingerprint information is acquired according to a wireless network at a current location;
the scene determining module 702 is configured to determine, when the current fingerprint information matches preset fingerprint information corresponding to a preset device scene, that the target terminal device is in the preset device scene, where the preset device scene is a scene of leaving a target area.
In some embodiments, the preset fingerprint information includes scene fingerprint information, where the scene fingerprint information is fingerprint information collected by any terminal device in an exit area, and the exit area is an area with a preset distance from the target area, and the any terminal device is the target terminal device or other terminal devices.
In some embodiments, the preset fingerprint information includes fingerprint information of a scene subspace, the scene subspace is generated according to fingerprint information clusters collected by at least one terminal device in the departure area, and the fingerprint information of the scene subspace is matched with the scene fingerprint information.
In some embodiments, the device further includes a preset fingerprint module, where the preset fingerprint module is configured to acquire the preset fingerprint information by acquiring fingerprint information of a target wireless network corresponding to the target area, where the target wireless network includes a wireless fidelity network.
In some embodiments, the preset fingerprint information is scene fingerprint information, and the preset fingerprint module is specifically configured to: determine that the arbitrary terminal device switches from a connected state to a disconnected state with respect to the target wireless network; and acquire the fingerprint information collected by the arbitrary terminal device in a first target period before disconnection from the target wireless network, to obtain the scene fingerprint information, where the end time of the first target period is the time at which the arbitrary terminal device is disconnected from the target wireless network, and the arbitrary terminal device is the target terminal device or another terminal device.
In some embodiments, the preset fingerprint module is further configured to: before acquiring the fingerprint information in the preset first target period preceding the disconnection of the arbitrary terminal device from the target wireless network, determine that the arbitrary terminal device does not reconnect to the target wireless network within a preset time period after the disconnection.
In some embodiments, the preset fingerprint information is scene fingerprint information, and the preset fingerprint module is specifically configured to: determining that any terminal equipment and the target wireless network are switched from a disconnected state to a connected state; acquiring fingerprint information acquired by the arbitrary terminal equipment in a second target period after connection is established with the target wireless network, and obtaining scene fingerprint information, wherein the starting time of the second target period is the time of establishing the connection of the arbitrary terminal equipment, and the arbitrary terminal equipment is the target terminal equipment or other terminal equipment.
In some embodiments, the preset fingerprint information includes fingerprint information of a scene subspace, the preset fingerprint module includes an acquisition unit, a clustering unit, and a determination unit, wherein,
the acquisition unit is used for acquiring fingerprint information in a preset signal intensity range in the target wireless network by any terminal equipment to obtain a plurality of initial fingerprint information, wherein the preset signal intensity range is determined according to scene fingerprint information, and the scene fingerprint information is fingerprint information indicating that the any terminal equipment is in a scene of the preset equipment;
The clustering unit is used for carrying out clustering processing on the plurality of initial fingerprint information to obtain at least one initial subspace;
the determining unit is configured to determine a scene subspace according to a similarity between the fingerprint information of the at least one initial subspace and the scene fingerprint information, where the similarity between the fingerprint information of the scene subspace and the scene fingerprint information is greater than or equal to a similarity threshold.
In some embodiments, the determining unit is specifically configured to: perform similarity calculation between each piece of fingerprint information of each initial subspace in the at least one initial subspace and the scene fingerprint information, to obtain a plurality of similarities corresponding to each initial subspace; and if the proportion of target similarities among the plurality of similarities corresponding to an initial subspace is greater than a preset proportion, determine that initial subspace as the scene subspace, where a target similarity is greater than or equal to the similarity threshold.
In the embodiments of the present application, whether the user has left the target area can be determined according to the fingerprint information of the user's current position, providing the user with a convenient, fast, and imperceptible away-scene sensing effect and serving as a basis for subsequent recommendation applications.
The description of the apparatus embodiments above is similar to that of the method embodiments above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the device embodiments of the present application, please refer to the description of the method embodiments of the present application for understanding.
It should be noted that, in the embodiment of the present application, the division of the modules by the scene determination device shown in fig. 9 is schematic, and is merely a logic function division, and there may be another division manner in actual implementation. In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units. Or in a combination of software and hardware.
It should be noted that, in the embodiments of the present application, if the above method is implemented in the form of a software functional module and sold or used as a separate product, it may also be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, or the part thereof contributing to the related art, may be embodied in the form of a software product stored in a storage medium, which includes several instructions for causing a terminal device to perform all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disc. Thus, the embodiments of the present application are not limited to any specific combination of hardware and software.
Fig. 10 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 10, the terminal device 100 may include a processor 110, a memory 120, a wireless communication module 130, a mobile communication module 140, a camera 150, a usb interface 160, a display 170, and the like.
Processor 110 may include one or more processing units. For example, the processor 110 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application, such as one or more microprocessors (digital signal processors, DSPs) or one or more field programmable gate arrays (FPGAs). The different processing units may be separate devices or may be integrated in one or more processors.
Memory 120 may be used to store computer-executable program code that includes instructions. The memory 120 may include a storage program area and a storage data area. The storage program area may store an operating system and an application program required for at least one function (such as a sound playing function or an image playing function). The storage data area may store data (such as audio data and video data) created during use of the terminal device 100. In addition, the memory 120 may include a high-speed random access memory, and may also include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS). The processor 110 performs the various functional applications and data processing of the terminal device 100 by executing instructions stored in the memory 120 and/or instructions stored in a memory provided in the processor.
The wireless communication module 130 may provide solutions for wireless communication including WLAN, such as Wi-Fi network, bluetooth, NFC, IR, etc., applied on the terminal device 100. The wireless communication module 130 may be one or more devices integrating at least one communication processing module. In some embodiments of the present application, the terminal device 100 may establish a wireless communication connection with other terminal devices through the wireless communication module 130.
The mobile communication module 140 may provide a solution including 2G/3G/4G/5G wireless communication applied on the terminal device 100. The mobile communication module 140 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. In some embodiments, at least some of the functional modules of the mobile communication module 140 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 140 may be disposed in the same device as at least some of the modules of the processor 110.
The camera 150 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, the terminal device 100 may include 1 or N cameras 150, N being a positive integer greater than 1.
The USB interface 160 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 160 may be used to connect other terminal devices. In still other embodiments, the terminal device 100 may also be connected to a camera through the USB interface 160 for capturing images.
The display 170 is used to display images, videos, and the like. The display 170 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), or the like. In some embodiments, the terminal device 100 may include 1 or N displays 170, N being a positive integer greater than 1.
It is to be understood that the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the terminal device 100. In other embodiments of the present application, terminal device 100 may include more or less components than illustrated, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
An embodiment of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps in the scene determination method provided in the above embodiment.
Any combination of one or more computer readable media may be utilized as the above-described computer readable storage media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (erasable programmable read only memory, EPROM) or flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, radio Frequency (RF), etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present specification may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., via the Internet using an Internet service provider).
The present application provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the steps of the scene determination method provided by the method embodiments described above.
The computer program product includes one or more computer instructions. When loaded and executed on a computer, produces a flow or function in accordance with embodiments of the present invention, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer readable storage medium may be any available medium that can be stored by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid State Disk (SSD)), etc.
It should be noted here that the description of the storage medium, program product, and device embodiments above is similar to that of the method embodiments, with similar beneficial effects. For technical details not disclosed in the storage medium, program product, and device embodiments of the present application, refer to the description of the method embodiments of the present application.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" or "some embodiments" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" or "in some embodiments" in various places throughout this specification are not necessarily referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in various embodiments of the present application, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present application. The foregoing embodiment numbers of the present application are merely for describing, and do not represent advantages or disadvantages of the embodiments. The foregoing description of various embodiments is intended to highlight differences between the various embodiments, which may be the same or similar to each other by reference, and is not repeated herein for the sake of brevity.
The term "and/or" herein merely describes an association relation between associated objects, indicating that three relations may exist; for example, "object A and/or object B" may represent: object A alone, both object A and object B, or object B alone.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments are merely illustrative, and the division of the modules is merely a logical function division, and other divisions may be implemented in practice, such as: multiple modules or components may be combined, or may be integrated into another system, or some features may be omitted, or not performed. In addition, the various components shown or discussed may be coupled or directly coupled or communicatively coupled to each other via some interface, whether indirectly coupled or communicatively coupled to devices or modules, whether electrically, mechanically, or otherwise.
The modules described above as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules; can be located in one place or distributed to a plurality of network units; some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present application may be integrated in one processing unit, or each module may be separately used as one unit, or two or more modules may be integrated in one unit; the integrated modules may be implemented in hardware or in hardware plus software functional units.
Those of ordinary skill in the art will appreciate that: all or part of the steps for implementing the above method embodiments may be implemented by hardware related to program instructions, and the foregoing program may be stored in a computer readable storage medium, where the program, when executed, performs steps including the above method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read Only Memory (ROM), a magnetic disk or an optical disk, or the like, which can store program codes.
Alternatively, the integrated units described above may be stored in a computer-readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product. Based on such understanding, the technical solutions of the embodiments of the present application, or the part thereof contributing to the related art, may be embodied in the form of a software product stored in a storage medium, which includes several instructions for causing a terminal device to perform all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a ROM, a magnetic disk, or an optical disc.
The methods disclosed in the several method embodiments provided in the present application may be arbitrarily combined without collision to obtain a new method embodiment. The features disclosed in the several product embodiments provided in the present application may be combined arbitrarily without conflict to obtain new product embodiments. The features disclosed in the several method or apparatus embodiments provided in the present application may be arbitrarily combined without conflict to obtain new method embodiments or apparatus embodiments.
The foregoing is merely illustrative of embodiments of the present application, and the protection scope of the present application is not limited thereto. Any changes or substitutions readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. A scene determination method, characterized by being applied to a target terminal device, comprising:
acquiring current fingerprint information, wherein the current fingerprint information is acquired according to a wireless network at a current position;
and in a case where the current fingerprint information matches preset fingerprint information corresponding to a preset device scene, determining that the target terminal device is in the preset device scene, wherein the preset device scene is a scene of leaving a target area.
2. The method according to claim 1, wherein the preset fingerprint information comprises scene fingerprint information, the scene fingerprint information being fingerprint information collected by any terminal device in a departure area, the departure area being an area within a preset distance of the target area, and the any terminal device being the target terminal device or another terminal device.
3. The method according to claim 2, wherein the preset fingerprint information comprises fingerprint information of a scene subspace, the scene subspace being generated by clustering fingerprint information collected by at least one terminal device in the departure area, and the fingerprint information of the scene subspace matching the scene fingerprint information.
4. The method of claim 1, wherein prior to the acquiring the current fingerprint information, the method further comprises:
acquiring the preset fingerprint information by collecting fingerprint information of a target wireless network corresponding to the target area, wherein the target wireless network comprises a wireless fidelity (Wi-Fi) network.
5. The method according to claim 4, wherein the preset fingerprint information is scene fingerprint information, and the acquiring the preset fingerprint information by collecting fingerprint information of the target wireless network corresponding to the target area comprises:
determining that any terminal device has switched from a connected state to a disconnected state with respect to the target wireless network; and
acquiring fingerprint information collected by the any terminal device in a first target period before disconnection from the target wireless network to obtain the scene fingerprint information, wherein the ending time of the first target period is the time at which the any terminal device is disconnected from the target wireless network, and the any terminal device is the target terminal device or another terminal device.
6. The method according to claim 5, wherein before the acquiring fingerprint information collected by the any terminal device in the first target period before disconnection from the target wireless network, the method further comprises:
determining that the any terminal device has not reconnected to the target wireless network within a preset time period after the disconnection.
7. The method according to claim 4, wherein the preset fingerprint information is scene fingerprint information, and the acquiring the preset fingerprint information by collecting fingerprint information of the target wireless network corresponding to the target area comprises:
determining that any terminal device has switched from a disconnected state to a connected state with respect to the target wireless network; and
acquiring fingerprint information collected by the any terminal device in a second target period after establishing the connection with the target wireless network to obtain the scene fingerprint information, wherein the starting time of the second target period is the time at which the any terminal device establishes the connection, and the any terminal device is the target terminal device or another terminal device.
8. The method according to claim 4, wherein the preset fingerprint information comprises fingerprint information of a scene subspace, and the acquiring the preset fingerprint information by collecting fingerprint information of the target wireless network corresponding to the target area comprises:
collecting, by any terminal device, fingerprint information within a preset signal strength range in the target wireless network to obtain a plurality of pieces of initial fingerprint information, wherein the preset signal strength range is determined according to scene fingerprint information, and the scene fingerprint information is fingerprint information indicating that the any terminal device is in the preset device scene;
clustering the plurality of pieces of initial fingerprint information to obtain at least one initial subspace; and
determining the scene subspace according to a similarity between fingerprint information of the at least one initial subspace and the scene fingerprint information, wherein the similarity between the fingerprint information of the scene subspace and the scene fingerprint information is greater than or equal to a similarity threshold.
9. The method according to claim 8, wherein the determining the scene subspace according to a similarity between fingerprint information of the at least one initial subspace and the scene fingerprint information comprises:
calculating a similarity between each piece of fingerprint information of each initial subspace of the at least one initial subspace and the scene fingerprint information, to obtain a plurality of similarities corresponding to each initial subspace; and
in a case where a proportion of target similarities among the plurality of similarities corresponding to an initial subspace is greater than a preset proportion, determining the initial subspace as the scene subspace, wherein each target similarity is greater than or equal to the similarity threshold.
10. A scene determination apparatus, applied to a target terminal device, comprising:
a fingerprint acquisition module, configured to acquire current fingerprint information, wherein the current fingerprint information is collected according to a wireless network at a current position; and
a scene determination module, configured to determine, in a case where the current fingerprint information matches preset fingerprint information corresponding to a preset device scene, that the target terminal device is in the preset device scene, wherein the preset device scene is a scene of leaving a target area.
11. A terminal device, comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein the processor, when executing the program, implements the steps of the scene determination method according to any one of claims 1 to 9.
12. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the scene determination method according to any one of claims 1 to 9.
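The subspace-selection logic recited in claims 8 and 9, and the matching step of claim 1, can be illustrated with a short Python sketch. This is an illustrative reconstruction, not the patent's implementation: the patent does not specify a similarity measure or clustering algorithm, so Jaccard similarity over sets of visible access-point identifiers (BSSIDs), the threshold values, and all function and variable names below are assumptions.

```python
def jaccard(fp_a, fp_b):
    """Similarity between two Wi-Fi fingerprints, each modelled here
    as a set of visible access-point identifiers (BSSIDs)."""
    union = fp_a | fp_b
    return len(fp_a & fp_b) / len(union) if union else 0.0

def find_scene_subspaces(initial_subspaces, scene_fp,
                         sim_threshold=0.6, min_proportion=0.8):
    """Claim 9, sketched: for each initial subspace (a cluster of
    fingerprints), count the fingerprints whose similarity to the
    scene fingerprint reaches sim_threshold; keep the subspace as a
    scene subspace only if that proportion exceeds min_proportion."""
    scene_subspaces = []
    for subspace in initial_subspaces:
        sims = [jaccard(fp, scene_fp) for fp in subspace]
        hits = sum(1 for s in sims if s >= sim_threshold)
        if hits / len(sims) > min_proportion:
            scene_subspaces.append(subspace)
    return scene_subspaces

def in_departure_scene(current_fp, scene_subspaces, sim_threshold=0.6):
    """Claim 1, sketched: the device is deemed to be leaving the
    target area when its current fingerprint matches any fingerprint
    of a scene subspace."""
    return any(jaccard(current_fp, fp) >= sim_threshold
               for subspace in scene_subspaces
               for fp in subspace)

# Hypothetical data: one cluster collected near the exit, one elsewhere.
scene_fp = {"ap1", "ap2", "ap3"}
near_exit = [{"ap1", "ap2", "ap3"}, {"ap1", "ap2"}, {"ap2", "ap3"}]
elsewhere = [{"ap8", "ap9"}, {"ap9"}]
subs = find_scene_subspaces([near_exit, elsewhere], scene_fp)
```

With these assumed thresholds, only the cluster collected near the exit survives the proportion test, and a device later observing `{"ap1", "ap3"}` would be classified as being in the departure scene.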
CN202311467697.9A 2023-11-06 2023-11-06 Scene determination method and device, terminal equipment and storage medium Pending CN117596546A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311467697.9A CN117596546A (en) 2023-11-06 2023-11-06 Scene determination method and device, terminal equipment and storage medium


Publications (1)

Publication Number Publication Date
CN117596546A true CN117596546A (en) 2024-02-23

Family

ID=89917382

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311467697.9A Pending CN117596546A (en) 2023-11-06 2023-11-06 Scene determination method and device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117596546A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination