CN110672086B - Scene recognition method, device, equipment and computer readable medium


Info

Publication number
CN110672086B
Authority
CN
China
Legal status
Active
Application number
CN201810720276.5A
Other languages
Chinese (zh)
Other versions
CN110672086A (en)
Inventor
刘博文
张晓迪
徐云峰
陈炜于
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201810720276.5A
Publication of CN110672086A
Application granted
Publication of CN110672086B

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 — Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 — Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching

Abstract

The invention provides a scene recognition method, apparatus, device and computer readable medium. The method includes the following steps: collecting sensor information of a terminal device to obtain motion information and position information of the terminal device, and acquiring the acquisition time information corresponding to the sensor data as well as encrypted terminal device identification information; predicting various behavior states and position states of the terminal device by using the sensor information related to the motion information and the position information together with the corresponding acquisition time information; and determining the current scene of the terminal device by combining the behavior states and the position states. Embodiments of the invention are applicable to a wide variety of application scenes and therefore have high generality. In addition, because embodiments of the invention combine multiple time scales, the accuracy and robustness of scene prediction are high.

Description

Scene recognition method, device, equipment and computer readable medium
Technical Field
The present invention relates to the field of big data technologies, and in particular, to a scene recognition method, apparatus, device, and computer readable medium.
Background
Existing offline scene recognition technology mainly targets a specific application scene (such as smart parking or indoor positioning): it obtains real-time information about a user through dedicated application software or an information collector, and returns a result by applying specific retrieval and matching methods to a pre-built database.
However, the existing scene recognition technology still has the following defects:
1. The scene types are limited. Scene identification built for a particular application covers only a single scene type. For example, smart-driving applications identify only driving-related scenes (starting, parking, accelerating and decelerating, approaching a parking lot, and the like), while smart-retail applications identify only shopping-related scenes (arriving at a shopping mall, passing a store, making a purchase, and the like).
2. The time scale is fixed. Existing methods mainly perform scene prediction on the real-time information reported by the user and ignore the user's long-term and periodic behavior characteristics, so their ability to characterize the scene is limited. In addition, real-time scene recognition ignores the auxiliary information provided by variable-length time windows and is easily disturbed by short-term noise, which causes recognition errors.
3. The data source depends on only one type of data, and building and maintaining a high-quality database is costly, which limits both the accuracy and the breadth of scene recognition.
Disclosure of Invention
Embodiments of the present invention provide a scene recognition method, apparatus, device, and computer readable medium, so as to solve or alleviate one or more technical problems in the prior art.
In a first aspect, an embodiment of the present invention provides a scene identification method, including:
acquiring sensor information of the terminal equipment to obtain motion information and position information of the terminal equipment, and acquiring acquisition time information corresponding to the sensor data as well as encrypted terminal equipment identification information;
predicting various behavior states and position states corresponding to the terminal equipment by utilizing sensor information related to the motion information and the position information and corresponding acquisition time information;
and determining the current scene of the terminal equipment by combining the behavior state and the position state.
With reference to the first aspect, in a first implementation manner of the first aspect, the acquiring of sensor information of the terminal device to obtain its motion information and position information, and the acquiring of the acquisition time information corresponding to the sensor data and of the encrypted terminal device identification information, include:
collecting sensor information of the terminal device at certain time intervals, and marking each collection of sensor information with a timestamp; acquiring motion information of the terminal device by using at least one of an accelerometer, a gyroscope, a level meter, a magnetometer and a gravimeter;
acquiring position information of the terminal device by using GPS and/or a WIFI fingerprint;
and acquiring and encrypting identification information of the terminal device.
With reference to the first aspect, in a second implementation manner of the first aspect, the predicting of various behavior states and position states corresponding to the terminal device by using the sensor information related to the motion information and the position information and the corresponding acquisition time information includes:
based on a preset behavior state classification rule or a pre-trained behavior state classification model, performing short-time state judgment and/or steady state judgment by using the sensor information related to the motion information and the corresponding acquisition time information; and/or
judging the current position by using the sensor information related to the position information and the corresponding acquisition time information;
and performing long-term information association by using the identification information of the terminal device and the historical data information of the terminal device.
With reference to the second implementation manner of the first aspect, in a third implementation manner of the first aspect, the short-time state judgment includes:
acquiring sensor information related to the motion information of the terminal equipment acquired in a first time range, and determining the current short-time state of the terminal equipment by using the acquired sensor information.
With reference to the second implementation manner of the first aspect, in a fourth implementation manner of the first aspect, the steady state judgment includes:
and acquiring sensor information related to the motion information of the terminal equipment acquired in a second time range, and determining the current stable state of the terminal equipment by using the acquired sensor information.
With reference to the second implementation manner of the first aspect, in a fifth implementation manner of the first aspect, the determining the current position includes:
and acquiring sensor information related to the position information of the terminal equipment acquired in a third time range, and determining the current position area of the terminal equipment by using the acquired sensor information.
With reference to the second implementation manner of the first aspect, in a sixth implementation manner of the first aspect, the performing of long-term information association includes:
acquiring the encrypted identification information of the terminal device collected in a fourth time range, and associating long-term information of the terminal device by using the acquired identification information and the historical data information of the terminal device, where the long-term information includes resident points, frequently visited points, and offline behavior pattern information of the user.
In a second aspect, an embodiment of the present invention further provides a scene recognition apparatus, including:
the information acquisition module is used for acquiring sensor information of the terminal device to obtain motion information and position information of the terminal device, and for acquiring the acquisition time information corresponding to the sensor data and the encrypted terminal device identification information;
the state prediction module is used for predicting various behavior states and position states corresponding to the terminal equipment by utilizing the sensor information related to the motion information and the position information and the corresponding acquisition time information;
and the scene identification module is used for determining the current scene of the terminal equipment by combining the behavior state and the position state.
With reference to the second aspect, in a first implementation manner of the second aspect, the information obtaining module includes:
the sensor information acquisition submodule is used for collecting the sensor information of the terminal device at certain time intervals and marking each collection of sensor information with a timestamp, where the motion information of the terminal device is acquired by using at least one of an accelerometer, a gyroscope, a level meter, a magnetometer and a gravimeter, and the position information of the terminal device is acquired by using GPS and/or a WIFI fingerprint;
and the identification information acquisition submodule is used for acquiring the encrypted identification information of the terminal device.
With reference to the second aspect, in a second implementation manner of the second aspect, the state prediction module includes:
the behavior state judgment sub-module is used for carrying out short-time state judgment and/or stable state judgment by utilizing the sensor information related to the motion information and the corresponding acquisition time information thereof based on a preset behavior state classification rule or a pre-trained behavior state classification model; and/or
The position state judgment submodule is used for judging the current position by using the sensor information related to the position information and the corresponding acquisition time information;
And the long-term information association submodule is used for performing long-term information association by using the identification information of the terminal device and the historical data information of the terminal device.
With reference to the second implementation manner of the second aspect, in a third implementation manner of the second aspect, the behavior state determining sub-module is specifically configured to acquire sensor information related to the motion information of the terminal device acquired within a first time range, and determine a current short-time state of the terminal device by using the acquired sensor information.
With reference to the second implementation manner of the second aspect, in a fourth implementation manner of the second aspect, the behavior state judgment sub-module is specifically configured to acquire sensor information related to the motion information of the terminal device collected in a second time range, and determine the current steady state of the terminal device by using the acquired sensor information.
With reference to the second implementation manner of the second aspect, in a fifth implementation manner of the second aspect, the location state judgment sub-module is specifically configured to acquire sensor information related to location information of the terminal device acquired within a third time range, and determine a current location area of the terminal device by using the acquired sensor information.
With reference to the second implementation manner of the second aspect, in a sixth implementation manner of the second aspect, the long-term information association submodule is specifically configured to obtain the encrypted identification information of the terminal device collected in a fourth time range, and to associate long-term information of the terminal device by using the obtained identification information and the historical data information of the terminal device, where the long-term information includes resident points, frequently visited points, and offline behavior pattern information of the user.
The functions of the device can be realized by hardware, and can also be realized by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above-described functions.
In a third aspect, in a possible design, the structure of the scene recognition apparatus includes a processor and a memory, the memory is used for storing a program that supports the scene recognition apparatus to execute the scene recognition method in the first aspect, and the processor is configured to execute the program stored in the memory. The scene recognition apparatus may further include a communication interface for the scene recognition apparatus to communicate with other devices or a communication network.
In a fourth aspect, an embodiment of the present invention provides a computer-readable medium for storing computer software instructions for a scene recognition apparatus, which includes a program for executing the scene recognition method according to the first aspect.
Compared with the prior art, embodiments of the present invention are applicable to a wide variety of application scenes and therefore have high generality. In addition, because embodiments of the invention combine multiple time scales, the accuracy and robustness of scene prediction are high.
Furthermore, embodiments of the invention take the user's long-term information into account and can therefore characterize scenes more powerfully. At the same time, embodiments of the invention make full use of multiple kinds of sensor information, so both the data sources and the scene information are richer.
The foregoing summary is provided for the purpose of description only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features of the present invention will be readily apparent by reference to the drawings and following detailed description.
Drawings
In the drawings, like reference numerals refer to the same or similar parts or elements throughout the several views unless otherwise specified. The figures are not necessarily to scale. It is appreciated that these drawings depict only some embodiments in accordance with the disclosure and are therefore not to be considered limiting of its scope.
FIG. 1 is a flow chart of a scene recognition method according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating step S100 according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating step S200 according to an embodiment of the present invention;
FIG. 4 is a connection block diagram of a scene recognition device according to another embodiment of the present invention;
FIG. 5 is an internal block diagram of an information acquisition module according to another embodiment of the present invention;
FIG. 6 is an internal block diagram of a state prediction module according to another embodiment of the present invention;
FIG. 7 is a block diagram of a scene recognition apparatus according to another embodiment of the present invention.
Detailed Description
In the following, only certain exemplary embodiments are briefly described. As those skilled in the art will recognize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present invention. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive. The embodiment of the invention mainly provides a method and a device for identifying a common scene, and the technical scheme is expanded and described through the following embodiments respectively.
The present invention provides a scene recognition method and apparatus, and the following describes in detail the specific processing flow and principles of the scene recognition method and apparatus according to embodiments of the present invention.
FIG. 1 is a flowchart of a scene recognition method according to an embodiment of the present invention. The scene recognition method of the embodiment of the invention can comprise the following steps:
S100: acquire sensor information of the terminal device to obtain motion information and position information of the terminal device, and acquire the acquisition time information corresponding to the sensor data and the encrypted identification information of the terminal device.
The terminal device can be a mobile phone, a palmtop computer, a smart watch, a smart helmet, smart glasses or another smart terminal. As shown in FIG. 2, in one embodiment, step S100 includes:
S110: collect sensor information of the terminal device at certain time intervals, and mark each collection of sensor information with a timestamp. The motion information of the terminal device is acquired by using at least one of an accelerometer, a gyroscope, a level meter, a magnetometer and a gravimeter, and the position information of the terminal device is acquired by using GPS and/or a WIFI fingerprint.
The accelerometer can measure the current acceleration of the terminal device, the gyroscope can measure the angular velocity at which the terminal device is currently rotating, the level meter can judge whether the terminal device is currently horizontal, the magnetometer can detect changes in the magnetic field environment around the terminal device, and the gravimeter (gravity sensor) can measure the magnitude and direction of the gravity acting on the terminal device.
In addition, the position information of the terminal device is acquired through GPS (Global Positioning System) and/or WIFI (Wireless Fidelity).
When acquiring the terminal position information, the current position coordinates can be obtained directly through GPS positioning, or the position of the terminal device can be calculated from the WIFI fingerprint to which the terminal is connected and the corresponding signal strength.
S120: acquire and encrypt the identification information of the terminal device.
S200: and predicting various behavior states and position states corresponding to the terminal equipment by utilizing the sensor information related to the motion information and the position information and the corresponding acquisition time information.
As shown in FIG. 3, in one embodiment, step S200 includes:
S210: based on a preset behavior state classification rule or a pre-trained behavior state classification model, perform short-time state judgment and/or steady state judgment by using the sensor information related to the motion information and the corresponding acquisition time information; and/or
S220: judge the current position by using the sensor information related to the position information and the corresponding acquisition time information;
S230: perform long-term information association by using the identification information of the terminal device and the historical data information of the terminal device.
The state classification model and the position classification model can be trained in advance; sensor information related to any one or more kinds of motion information and to the position information is then input into them respectively, finally yielding the motion state and the position state of the terminal device. The motion state of the terminal device may indicate what (the holder of) the terminal device is doing, and the location state may indicate where (the holder of) the terminal device is. The motion state may include a short-time state and/or a steady state, etc.; the location state may include a current state and a long-term state, etc.
In one embodiment, when determining the short-time state according to the motion information, the method may include:
and acquiring sensor information related to the motion information of the terminal equipment acquired in a first time range, and determining the current short-time state of the terminal equipment by using the acquired sensor information.
The first time range used in determining the short-time state may be relatively small, such as 1 second, 2 seconds, and the like. For example, the user's motion may be predicted at a 1-second time granularity by a trained motion state determination model, yielding the current short-time state of (the holder of) the terminal device, such as walking, stationary, running, cycling, driving or climbing.
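As an illustration of the short-time state judgment, the sketch below classifies a 1-second accelerometer window with a pre-trained classifier. The simple feature set and the scikit-learn-style predict() interface are assumptions made for the example, not details given by the patent.

    import numpy as np

    SHORT_TIME_STATES = ["stationary", "walking", "running",
                         "cycling", "driving", "climbing"]

    def window_features(accel_xyz: np.ndarray) -> np.ndarray:
        # Summarize an N x 3 window of accelerometer rows as simple statistics
        # of the acceleration magnitude; a real system would add gyroscope,
        # magnetometer and other sensor features.
        mag = np.linalg.norm(accel_xyz, axis=1)
        return np.array([mag.mean(), mag.std(), mag.max(), mag.min()])

    def predict_short_time_state(model, accel_window: np.ndarray) -> str:
        # `model` is a behavior state classification model trained in advance;
        # here it is assumed to return an index into SHORT_TIME_STATES.
        features = window_features(accel_window).reshape(1, -1)
        return SHORT_TIME_STATES[int(model.predict(features)[0])]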
In one embodiment, the determining of the steady state may include:
and acquiring sensor information related to the motion information of the terminal equipment acquired in a second time range, and determining the current stable state of the terminal equipment by using the acquired sensor information.
Since short-time state judgment may be disturbed by transient factors and thus produce errors, a longer time window may be selected for the steady state: for example, the second time range may be set to 5 seconds, 10 seconds, or the like. If, within the last 10 seconds, the short-time state was judged "stationary" for 9 seconds and "running" for 1 second, the steady state is still judged to be "stationary"; the brief acceleration may merely have been caused by the user waving the hand holding the terminal device, so it can be determined that the current steady state of (the holder of) the terminal device is stationary.
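One simple reading of this steady state judgment is a majority vote over the short-time states falling within the second time range, which is what the sketch below implements; it reproduces the 9-seconds-stationary example above.

    from collections import Counter

    def steady_state(short_time_states: list) -> str:
        # Majority vote over the short-time states of the second time range;
        # short-lived noise such as a hand wave is voted away.
        return Counter(short_time_states).most_common(1)[0][0]

    # steady_state(["stationary"] * 9 + ["running"])  ->  "stationary"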
In one embodiment, the current position determination comprises:
and acquiring sensor information related to the position information of the terminal equipment acquired in a third time range, and determining the current position area of the terminal equipment by using the acquired sensor information.
The current position of the terminal device can be judged through GPS, WIFI and POIs (Points of Interest). Points of interest may include, for example, companies, stores and residential communities. The POI-AOI (Area of Interest) and POI-AP (Access Point) records stored in a POI database can be used to determine the offline location the user is visiting and its physical attributes (name, industry type, and the like). If the user stays at a certain POI, the sensor information of the user's terminal (e.g. collected by the accelerometer, gyroscope, level meter, magnetometer and gravimeter) should be stable over a certain time span, so the third time range used here may be longer than the previous ones, such as 1 minute or 5 minutes. For example, it can be determined through the current GPS position and WIFI list that the terminal device is currently in a certain shopping mall; further, if the terminal device is currently connected to the WIFI of a restaurant in that mall, it can additionally be determined that the terminal device is currently in the restaurant.
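The current position judgment could be sketched as follows. The in-memory poi_db list is a hypothetical stand-in for the POI database with its POI-AOI and POI-AP records: a WIFI access-point match refines a coarse GPS radius match, as in the mall-restaurant example above.

    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        # Great-circle distance in meters between two WGS-84 coordinates.
        r = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def current_poi(lat, lon, connected_ap, poi_db):
        # `poi_db` is a hypothetical list of records such as
        # {"name": "mall", "lat": ..., "lon": ..., "radius_m": 150, "aps": {...}}.
        ap_hit = next((p for p in poi_db if connected_ap in p.get("aps", ())), None)
        if ap_hit:                        # POI-AP match, e.g. a restaurant's WIFI
            return ap_hit
        return next((p for p in poi_db    # fall back to the POI-AOI (GPS) match
                     if haversine_m(lat, lon, p["lat"], p["lon"]) <= p["radius_m"]),
                    None)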
In one embodiment, the long-term information association includes:
acquiring the encrypted identification information of the terminal device collected in a fourth time range, and associating long-term information of the terminal device by using the acquired identification information and the historical data information of the terminal device, where the long-term information includes resident points, frequently visited points, and offline behavior pattern information of the user.
Through long-term data accumulation and storage, the user's long-term information can be identified, including resident points (company, home, and the like), frequently visited points (bus stops, subway stations, shopping malls and other places the user often visits), and offline behavior patterns (fitness routes, commuting routes, daily schedule, and the like). The fourth time range may therefore be much longer than the aforementioned ranges, such as 1 month, 2 months, and the like.
By combining the user's real-time information with this long-term information, feature data of more dimensions can be extracted, which supports richer and more complex scene descriptions. For example, if the user's resident points include a technology company, the user is probably an office worker; if the current visit information (current location) of the user's terminal device is then a primary school, combining the resident point with the visit information makes it possible to judge that the user's scene is "picking up a child from school" rather than "leaving school after class".
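The long-term information association might look like the sketch below, which derives resident points and frequently visited points by counting a device's POI visits accumulated over the fourth time range; the visit-count thresholds are illustrative assumptions, not values from the patent.

    from collections import Counter

    def long_term_points(visit_log, min_resident=40, min_frequent=8):
        # `visit_log` is a hypothetical iterable of (hashed_device_id, poi_name)
        # records accumulated over e.g. 1-2 months for one device.
        counts = Counter(poi for _, poi in visit_log)
        resident = {p for p, n in counts.items() if n >= min_resident}
        frequent = {p for p, n in counts.items() if min_frequent <= n < min_resident}
        return {"resident_points": resident, "frequent_points": frequent}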
S300: and determining the current scene of the terminal equipment by combining the behavior state and the position state.
For example, after the collected motion-related sensor information is analyzed, the current motion state of the user is found to be stationary; after the position-related sensor information is identified, the user's current position is found to be a library. Combining the two yields the scene: the user is reading in the library.
For another example, if the user's motion state is stationary and the position state is a coffee shop, and the long-term information further shows that the user is in the coffee shop during working hours on workdays, then the finally identified scene is: the user is working in the coffee shop.
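Combining the behavior state, the position state and the long-term information into a scene can be as simple as a rule table; the sketch below encodes the library and coffee-shop examples above. The long_term dictionary and its workday_daytime_places key are hypothetical.

    def identify_scene(steady_state: str, poi: str, long_term: dict) -> str:
        # Rule-based combination mirroring the examples in the description.
        if steady_state == "stationary" and poi == "library":
            return "reading in the library"
        if steady_state == "stationary" and poi == "coffee shop":
            # Long-term information disambiguates the stationary coffee-shop case.
            if "coffee shop" in long_term.get("workday_daytime_places", set()):
                return "working in the coffee shop"
            return "relaxing in the coffee shop"
        return "unknown"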
Compared with the prior art, embodiments of the present invention are applicable to a wide variety of application scenes and therefore have high generality. In addition, because embodiments of the invention combine multiple time scales, the accuracy and robustness of scene prediction are high.
Furthermore, embodiments of the invention take the user's long-term information into account and can therefore characterize scenes more powerfully. At the same time, embodiments of the invention make full use of multiple kinds of sensor information, so both the data sources and the scene information are richer.
As shown in fig. 4, in another embodiment, an embodiment of the present invention further provides a scene recognition apparatus, including:
the information acquiring module 100 is configured to acquire sensor information of the terminal device to acquire motion information and position information thereof, and acquire acquisition time information corresponding to the sensor data and encrypted terminal device identification information.
And a state prediction module 200, configured to predict various behavior states and location states corresponding to the terminal device by using the sensor information related to the motion information and the location information and the corresponding acquisition time information.
And a scene identification module 300, configured to determine, by combining the behavior state and the location state, a current scene where the terminal device is located.
As shown in fig. 5, the information acquiring module 100 includes:
a motion information collecting sub-module 110, configured to collect sensor information of the terminal device at a certain time interval, and mark a timestamp for collecting the sensor information; the motion information of the terminal equipment is acquired by at least one of an accelerometer, a gyroscope, a level meter, a magnetometer and a gravimeter, and the position information of the terminal equipment is acquired by a GPS and/or WIFI fingerprint.
And the identification information acquisition submodule 120 is configured to acquire the encrypted identification information of the terminal device.
As shown in fig. 6, the state prediction module 200 includes:
a behavior state judgment sub-module 210, configured to perform short-term state judgment and/or steady state judgment and/or perform steady state judgment based on a preset behavior state classification rule or a pre-trained behavior state classification model by using the sensor information related to the motion information and the corresponding acquisition time information thereof
And the position state judgment submodule 220 is configured to perform current position judgment by using the sensor information related to the position information and the corresponding acquisition time information.
And the long-term information association submodule 230 is configured to perform long-term information association by using the identification information of the terminal device and the historical data information of the terminal device.
The behavior state determination submodule 210 is specifically configured to acquire sensor information related to the motion information of the terminal device acquired in a first time range, and determine a current short-time state of the terminal device by using the acquired sensor information.
The behavior state determination submodule 210 is specifically configured to obtain sensor information related to the motion information of the terminal device collected in the second time range, and determine a current stable state of the terminal device by using the obtained sensor information.
The location status determining submodule 220 is specifically configured to acquire sensor information related to the location information of the terminal device acquired within a third time range, and determine a current location area of the terminal device by using the acquired sensor information.
The long-term information association submodule 230 is specifically configured to obtain the encrypted identification information of the terminal device collected in a fourth time range, and to associate long-term information of the terminal device by using the obtained identification information and the historical data information of the terminal device, where the long-term information includes resident points, frequently visited points, and offline behavior pattern information of the user.
In another embodiment, the present invention further provides a scene recognition apparatus, as shown in fig. 7, the apparatus including: a memory 510 and a processor 520, the memory 510 having stored therein computer programs that are executable on the processor 520. The processor 520, when executing the computer program, implements the scene recognition method in the above embodiments. The number of the memory 510 and the processor 520 may be one or more.
The apparatus further comprises:
the communication interface 530 is used for communicating with an external device to perform data interactive transmission.
Memory 510 may comprise high-speed RAM memory, and may also include non-volatile memory (non-volatile memory), such as at least one disk memory.
If the memory 510, the processor 520, and the communication interface 530 are implemented independently, the memory 510, the processor 520, and the communication interface 530 may be connected to each other through a bus and perform communication with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 7, but that does not indicate only one bus or one type of bus.
Optionally, in a specific implementation, if the memory 510, the processor 520, and the communication interface 530 are integrated on a chip, the memory 510, the processor 520, and the communication interface 530 may complete mutual communication through an internal interface.
In the description of the specification, reference to the description of "one embodiment," "some embodiments," "an example," "a specific example," or "some examples" or the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
The computer readable medium described in embodiments of the present invention may be a computer readable signal medium or a computer readable storage medium or any combination of the two. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber device, and a portable Compact Disc Read-Only Memory (CD-ROM). Additionally, the computer-readable storage medium may even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
In embodiments of the present invention, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, Radio Frequency (RF), etc., or any suitable combination of the foregoing.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following technologies, which are well known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may also be stored in a computer readable storage medium. The storage medium may be a read-only memory, a magnetic or optical disk, or the like.
While the invention has been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (6)

1. A method for scene recognition, comprising:
acquiring sensor information of the terminal equipment to obtain motion information and position information of the terminal equipment, and acquiring acquisition time information corresponding to the sensor data as well as encrypted terminal equipment identification information;
predicting various behavior states and position states corresponding to the terminal equipment by utilizing the sensor information related to the motion information and the position information and the corresponding acquisition time information;
determining the current scene of the terminal equipment by combining the behavior state and the position state;
the predicting of various behavior states and position states corresponding to the terminal device by using the sensor information related to the motion information and the position information and the corresponding acquisition time information comprises the following steps:
based on a preset behavior state classification rule or a pre-trained behavior state classification model, performing short-time state judgment and/or steady state judgment by using sensor information related to the motion information and the corresponding acquisition time information;
the short-time state judgment comprises the following steps:
acquiring sensor information related to motion information of the terminal equipment acquired in a first time range, and determining the current short-time state of the terminal equipment by using the acquired sensor information;
the steady state determination includes:
acquiring sensor information related to the motion information of the terminal equipment acquired within a second time range, and determining the current stable state of the terminal equipment by using the acquired sensor information;
judging the current position by utilizing the sensor information related to the position information and the corresponding acquisition time information thereof;
performing long-term information association by using the terminal equipment identification information and the historical data information of the terminal equipment;
the current position determination includes:
acquiring sensor information related to the position information of the terminal equipment acquired within a third time range, and determining a current position area of the terminal equipment by using the acquired sensor information;
the long-term information association comprises:
and acquiring the encrypted identification information of the terminal equipment collected in a fourth time range, and associating long-term information of the terminal equipment by using the acquired identification information and the historical data information of the terminal equipment, wherein the long-term information comprises resident points, frequently visited points, and offline behavior pattern information of the user.
2. The method of claim 1, wherein the acquiring sensor information of the terminal device to obtain motion information and position information thereof, acquiring acquisition time information corresponding to the sensor data, and encrypted terminal device identification information comprises:
collecting sensor information of the terminal equipment at certain time intervals, and marking a timestamp for collecting the sensor information; acquiring motion information of the terminal equipment by using at least one of an accelerometer, a gyroscope, a level meter, a magnetometer and a gravimeter, and acquiring position information of the terminal equipment by using a GPS and/or WIFI fingerprint;
and acquiring and encrypting the identification information of the terminal equipment.
3. A scene recognition apparatus, comprising:
the information acquisition module is used for acquiring sensor information of the terminal equipment to obtain motion information and position information of the terminal equipment, and for acquiring the acquisition time information corresponding to the sensor data and the encrypted terminal equipment identification information;
the state prediction module is used for predicting various behavior states and position states corresponding to the terminal equipment by utilizing the sensor information related to the motion information and the position information and the corresponding acquisition time information;
the scene identification module is used for determining the current scene of the terminal equipment by combining the behavior state and the position state;
the state prediction module comprises:
the behavior state judgment sub-module is used for carrying out short-time state judgment and/or stable state judgment by utilizing the sensor information related to the motion information and the corresponding acquisition time information thereof based on a preset behavior state classification rule or a pre-trained behavior state classification model;
the behavior state judgment submodule is specifically used for acquiring sensor information related to the motion information of the terminal equipment acquired in a first time range, and determining the current short-time state of the terminal equipment by using the acquired sensor information;
the behavior state judgment submodule is specifically used for acquiring sensor information related to the motion information of the terminal equipment acquired within a second time range, and determining the current stable state of the terminal equipment by using the acquired sensor information; the position state judgment submodule is used for judging the current position by utilizing the sensor information related to the position information and the corresponding acquisition time information;
the long-term information association submodule is used for performing long-term information association by using the identification information of the terminal equipment and the historical data information of the terminal equipment;
the position state judging submodule is specifically used for acquiring sensor information related to the position information of the terminal equipment acquired in a third time range, and determining a current position area of the terminal equipment by using the acquired sensor information;
the long-term information association submodule is specifically configured to acquire the encrypted identification information of the terminal equipment collected in a fourth time range, and to associate long-term information of the terminal equipment by using the acquired identification information and the historical data information of the terminal equipment, wherein the long-term information comprises resident points, frequently visited points, and offline behavior pattern information of the user.
4. The apparatus of claim 3, wherein the information obtaining module comprises:
the sensor information acquisition submodule is used for collecting the sensor information of the terminal equipment at certain time intervals and marking each collection of sensor information with a timestamp, wherein the motion information of the terminal equipment is acquired by using at least one of an accelerometer, a gyroscope, a level meter, a magnetometer and a gravimeter, and the position information of the terminal equipment is acquired by using GPS and/or a WIFI fingerprint;
and the identification information acquisition submodule is used for acquiring the encrypted identification information of the terminal equipment.
5. A scene recognition apparatus, characterized in that the apparatus comprises:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the scene recognition method recited in any of claims 1-2.
6. A computer-readable medium, in which a computer program is stored which, when being executed by a processor, carries out the scene recognition method according to any one of claims 1-2.
CN201810720276.5A 2018-07-03 2018-07-03 Scene recognition method, device, equipment and computer readable medium Active CN110672086B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810720276.5A CN110672086B (en) 2018-07-03 2018-07-03 Scene recognition method, device, equipment and computer readable medium


Publications (2)

Publication Number Publication Date
CN110672086A CN110672086A (en) 2020-01-10
CN110672086B (en) 2023-01-31

Family

ID=69065792


Country Status (1)

Country Link
CN (1) CN110672086B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111414900B (en) * 2020-04-30 2023-11-28 Oppo广东移动通信有限公司 Scene recognition method, scene recognition device, terminal device and readable storage medium
CN114979949B (en) * 2022-07-26 2022-12-27 荣耀终端有限公司 Flight state identification method and flight state identification device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680046A * 2013-11-29 2015-06-03 华为技术有限公司 (Huawei Technologies Co., Ltd.) User activity recognition method and device
CN107094177A * 2017-04-28 2017-08-25 北京小米移动软件有限公司 (Beijing Xiaomi Mobile Software Co., Ltd.) Method and device for determining a scene

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110114732A * 2007-09-24 2011-10-19 애플 인크. (Apple Inc.) Embedded authentication systems in an electronic device
CN106408026B * 2016-09-20 2020-04-28 百度在线网络技术(北京)有限公司 (Baidu Online Network Technology (Beijing) Co., Ltd.) User travel mode identification method and device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
可穿戴式个人室内位置和行为 (Wearable Personal Indoor Location and Behavior); 胡秋扬; China Master's Theses Full-text Database, Information Science and Technology Series; 2015-12-15 (No. 12); I140-366, pages 11, 17-20, 22-23, 40, 49 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant