WO2023280273A1 - Système et procédé de commande (Control system and method) - Google Patents

Système et procédé de commande (Control system and method)

Info

Publication number
WO2023280273A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
security
preset
video
area
Prior art date
Application number
PCT/CN2022/104406
Other languages
English (en)
Chinese (zh)
Inventor
李涛
孙福尧
俞泓
卓训隆
刘楠城
唐皓
邹勇
Original Assignee
云丁网络技术(北京)有限公司 (Yunding Network Technology (Beijing) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202121549850.9U
Priority claimed from CN202110929241.4A
Priority claimed from CN202110928953.4A
Priority claimed from CN202111568028.1A
Priority claimed from CN202111608219.6A
Priority claimed from CN202210100036.1A
Priority claimed from CN202210137781.3A
Application filed by 云丁网络技术(北京)有限公司
Priority to CN202280048533.XA (published as CN117730524A)
Publication of WO2023280273A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 — Scenes; Scene-specific elements
    • G06V20/40 — Scenes; Scene-specific elements in video content
    • G — PHYSICS
    • G07 — CHECKING-DEVICES
    • G07C — TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00 — Individual registration on entry or exit

Definitions

  • This description relates to the technical field of control systems, in particular to a control method and system related to smart devices.
  • One of the embodiments of this specification provides a control method, including: based on trigger information of a smart device, acquiring related information collected by one or more collection devices of the smart device; and processing the related information and/or the trigger information based on a preset algorithm to control the smart device to perform a corresponding operation.
  • The embodiments of this specification provide an intelligent device, which is used to at least partially execute the control method of the embodiments of this specification.
  • The intelligent device includes a first body and one or more collection devices, at least one of which is disposed on the first body.
  • The embodiments of this specification provide a control system, including: a storage device storing instructions; and one or more processors communicating with the storage device, wherein, when executing the instructions, the one or more processors cause the system to: based on trigger information of the smart device, obtain relevant information collected by one or more collection devices of the smart device; and process the relevant information and/or the trigger information based on a preset algorithm to control the smart device to perform a corresponding operation.
  • The embodiments of this specification provide a computer-readable storage medium storing computer instructions; when the instructions are executed by one or more processors of the system, the system: based on trigger information of the smart device, obtains relevant information collected by one or more collection devices of the smart device; and processes the relevant information and/or the trigger information based on a preset algorithm to control the smart device to perform corresponding operations.
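As an illustrative sketch only, the trigger-then-process control flow described above might look like the following; every name here (`preset_algorithm`, `control`, the dict layout) is hypothetical, not taken from the patent text.

```python
# Illustrative sketch only: names and data layout are hypothetical.

def preset_algorithm(trigger_info, related_info):
    """Process trigger/related information and decide the operation to perform."""
    if trigger_info.get("type") == "human_detected" and related_info.get("motion"):
        return "record_and_alert"
    return "standby"

def control(smart_device):
    """Acquire related info from the collection devices, then pick an operation."""
    trigger_info = smart_device["trigger"]
    # Merge the information reported by each collection device.
    related_info = {k: v for dev in smart_device["collectors"] for k, v in dev.items()}
    return preset_algorithm(trigger_info, related_info)

device = {
    "trigger": {"type": "human_detected"},
    "collectors": [{"motion": True}, {"video": "frame_001"}],
}
```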
  • The embodiments of this specification provide a method for controlling a smart device, including: receiving trigger information at a communication module in the smart device; in response to the trigger information, placing the communication module in a working mode matching the trigger information; and controlling, through the communication module, the working state of at least one collection device in the smart device.
  • The embodiments of this specification provide a method for processing security information, including: after obtaining security information indicating the occurrence of a security event, controlling one or more collection devices to collect information on the security area; and adding a security mark to the information collected by the one or more collection devices, the security mark matching the security event indicated by the security information.
  • The embodiments of this specification provide a security information processing device, including a collection control unit and an identification adding unit. The collection control unit is configured to: after obtaining security information indicating the occurrence of a security event, control one or more collection devices to collect information on the security area. The identification adding unit is configured to: add a security identification to the information collected by the one or more collection devices, the security identification matching the security event indicated by the security information.
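A hypothetical sketch of "add a security mark matching the security event": collect from each device, then tag every record with the event kind. `SecurityEvent` and `collect_and_mark` are invented names for illustration.

```python
# Hypothetical sketch; SecurityEvent and collect_and_mark are invented names.
from dataclasses import dataclass

@dataclass
class SecurityEvent:
    kind: str        # e.g. "forced_entry"
    timestamp: float

def collect_and_mark(event, collectors):
    """Collect from each device, then tag every record with a matching mark."""
    records = []
    for collect in collectors:
        data = collect()  # collect information on the security area
        records.append({"data": data,
                        "security_mark": event.kind,  # mark matches the event
                        "ts": event.timestamp})
    return records

records = collect_and_mark(SecurityEvent("forced_entry", 1700000000.0),
                           [lambda: "video_clip", lambda: "audio_clip"])
```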
  • The embodiments of this specification provide a monitoring video distribution method, including: acquiring a first video of a first preset area and a second video of a second preset area, where the security level of the first preset area is higher than that of the second preset area; determining the activity track of a preset object based on the first video; determining the activity scene of the preset object based on the activity track; identifying the identity of the preset object based on the second video; determining the pushing level of the first video and/or the second video based on the activity scene and the identity; and distributing the first video and/or the second video to one or more users based on the pushing level.
  • The embodiments of this specification provide a monitoring video distribution system, including: a video acquisition module, configured to acquire a first video of a first preset area and a second video of a second preset area, where the security level of the first preset area is higher than that of the second preset area; a trajectory determination module, configured to determine the activity trajectory of a preset object based on the first video; a scene determination module, configured to determine the activity scene of the preset object based on the activity trajectory; an identification module, configured to identify the identity of the preset object based on the second video; a push level determination module, configured to determine the pushing level of the first video and/or the second video based on the activity scene and the identity; and a distribution module, configured to distribute the first video and/or the second video to one or more users based on the pushing level.
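A hypothetical sketch of mapping activity scene and identity to a push level, then selecting recipients. The scene/identity labels, rule table, and role names are illustrative assumptions, not from the patent.

```python
# Illustrative rule table and roles; none of these values come from the patent.

PUSH_RULES = {
    ("loitering", "stranger"): 3,  # most urgent: unknown person lingering
    ("passing",   "stranger"): 2,
    ("loitering", "trusted"):  1,
    ("passing",   "trusted"):  0,  # routine: no push
}

def push_level(scene: str, identity: str) -> int:
    """Map (activity scene, identity) to a pushing level; default to 1."""
    return PUSH_RULES.get((scene, identity), 1)

def distribute(level: int, users):
    """Select recipients: everyone for urgent events, owners only otherwise."""
    if level >= 2:
        return list(users)
    if level == 1:
        return [u for u in users if u["role"] == "owner"]
    return []

users = [{"name": "alice", "role": "owner"}, {"name": "bob", "role": "member"}]
```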
  • The embodiments of this specification provide a method for comprehensive monitoring information management, including: acquiring a first video of a first preset area, and judging, based on the first video, the probability that a preset target object exists in the first preset area; when that probability satisfies a preset condition, acquiring a second video of a second preset area; and generating comprehensive monitoring information based on the first video and the second video and sending it to a target terminal.
  • The embodiments of this specification provide a comprehensive monitoring information management system, including: a first acquisition module, configured to acquire a first video of a first preset area and judge, based on the first video, the probability that a preset target object exists in the first preset area; a second acquisition module, configured to acquire a second video of a second preset area when the probability that the preset target object exists in the first preset area satisfies a preset condition; and a generating module, configured to generate comprehensive monitoring information based on the first video and the second video and send it to a target terminal.
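The probability-gated acquisition above can be sketched as follows; the 0.8 threshold and the frame-flag "detector" are invented for illustration and are not values from the specification.

```python
# Illustrative sketch of probability-gated second-video acquisition.

THRESHOLD = 0.8  # hypothetical preset condition

def target_probability(first_video_frames) -> float:
    # Stand-in detector: fraction of frames flagged as containing the target.
    flagged = sum(1 for frame in first_video_frames if frame.get("target"))
    return flagged / max(len(first_video_frames), 1)

def comprehensive_monitoring(first_video, acquire_second_video):
    """Acquire the second video only when the preset condition is satisfied."""
    probability = target_probability(first_video)
    if probability >= THRESHOLD:
        second_video = acquire_second_video()  # capture the second preset area
        return {"first": first_video, "second": second_video,
                "probability": probability}
    return None  # condition not met: no comprehensive information generated

frames = [{"target": True}] * 4 + [{"target": False}]
info = comprehensive_monitoring(frames, lambda: ["second_area_clip"])
```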
  • The embodiments of this specification provide a method for improving image quality, including: performing face recognition on the current image to obtain a face area; obtaining the exposure value of the current image in response to turning off strong light suppression for the current image; and setting exposure weights inside and outside the face area based on a preset exposure scene corresponding to the exposure value, so that the current image is processed based on the set exposure weights to obtain an image of the face area, where the preset exposure scene includes the exposure weights inside and outside the face area corresponding to the exposure value.
  • The embodiments of this specification provide an image quality improvement device, including: a face recognition module, configured to perform face recognition on the current image to obtain a face area; an exposure value acquisition module, configured to obtain the exposure value of the current image after strong light suppression for the image is turned off; and a weight adjustment module, configured to set exposure weights inside and outside the face area based on a preset exposure scene corresponding to the exposure value, so that the current image is processed based on the set exposure weights to obtain an image of the face area, where the preset exposure scene includes the exposure weights inside and outside the face area corresponding to the exposure value.
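A minimal sketch of the exposure-weighting step, assuming a rectangular face box and two hypothetical preset scenes; the weight values and the 200 exposure-value cutoff are invented for illustration.

```python
# Minimal sketch: scene table, cutoff, and rectangular face box are assumptions.

PRESET_SCENES = {              # scene name -> (inside-face, outside-face) weight
    "backlit": (0.9, 0.1),
    "normal":  (0.6, 0.4),
}

def scene_for(exposure_value: float) -> str:
    """Pick a preset exposure scene from the measured exposure value."""
    return "backlit" if exposure_value > 200 else "normal"

def weight_map(width, height, face_box, exposure_value):
    """Per-pixel weight grid: one weight inside the face box, another outside."""
    inside_w, outside_w = PRESET_SCENES[scene_for(exposure_value)]
    x0, y0, x1, y1 = face_box
    return [[inside_w if (x0 <= x < x1 and y0 <= y < y1) else outside_w
             for x in range(width)] for y in range(height)]

wm = weight_map(4, 4, (1, 1, 3, 3), 250)  # high exposure value -> "backlit"
```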
  • The embodiments of this specification provide an index information management method, including: obtaining a request from a user to access an item management area; performing security verification on the identity of the user based on the access request and generating access information, where the item management area is a closable space; acquiring item information of the item management area and/or access information of the item during the trusted user's visit; and determining index information based on the item information and/or the access information of the item, where the index information is determined at least based on the access information.
  • An embodiment of the present specification provides an index information management system, including: a security module configured to obtain a request from a user to access an item management area, and perform security verification on the identity of the user based on the access request and generate access information, where the item management area is a closable space; and a management module configured to acquire item information of the item management area and/or access information of the item during the trusted user's visit, and determine index information based on the item information and/or access information of the item, where the index information is also determined at least based on the access information.
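A hedged sketch of the index-information flow: verify the visiting user's identity, record access information, then derive index entries combining item information with that access information. All names (`verify_access`, `build_index`) are hypothetical.

```python
# Hypothetical sketch; verify_access and build_index are invented names.

def verify_access(user, trusted_users):
    """Security-verify the user; return access information if trusted."""
    if user not in trusted_users:
        raise PermissionError("security verification failed")
    return {"user": user, "granted": True}

def build_index(item_info, access_info):
    """Determine index information from item info and access info."""
    return [{"item": name, "location": location, "last_access": access_info["user"]}
            for name, location in item_info.items()]

access = verify_access("alice", {"alice", "bob"})
index = build_index({"passport": "drawer 1", "watch": "tray 2"}, access)
```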
  • The embodiments of this specification provide a multimedia device, including: a first body, set on a smart device; and at least one target structure set on the first body; where the target structure includes at least a multimedia collection device, and the multimedia collection device is used to collect multimedia data in a target area corresponding to the smart device.
  • The embodiments of this specification further provide an intelligent security device, including: a second body; a lock body arranged on the second body, used to lock the intelligent security device and an object corresponding to the intelligent security device; and a multimedia device set on the second body. The multimedia device includes at least: a first body set on the second body; and at least one target structure arranged on the first body, where the target structure includes at least a multimedia collection device used to collect multimedia data in a target area corresponding to the intelligent security device.
  • Fig. 1 is a schematic diagram of an application scenario of a control method according to some embodiments of this specification
  • Fig. 2 is an exemplary block diagram of a control system according to some embodiments of the present specification
  • Fig. 3 is an exemplary flow chart of a control method according to some embodiments of this specification.
  • Fig. 4 is an exemplary flow chart of a smart device control method according to some embodiments of this specification.
  • Fig. 5 is an exemplary flow chart of a method for controlling a smart device according to other embodiments of this specification.
  • Fig. 6 is an exemplary flowchart of a method for controlling a smart device according to other embodiments of this specification.
  • Fig. 7 is an exemplary flowchart of a method for processing security information according to some embodiments of this specification.
  • Fig. 8 is an exemplary flowchart of a method for processing security information according to other embodiments of this specification.
  • Fig. 9 is an exemplary flowchart of a method for processing security information according to other embodiments of this specification.
  • Fig. 10 is an exemplary flowchart of a method for processing security information according to other embodiments of this specification.
  • Fig. 11 is an exemplary flowchart of a method for processing security information according to other embodiments of this specification.
  • Fig. 12 is an exemplary process diagram of synchronizing security signs between a collection device and a smart device according to some embodiments of this specification;
  • Fig. 13 is an exemplary process diagram of a smart device transmitting data to a collection device according to some embodiments of the present specification
  • Fig. 14 is an exemplary process diagram of a collection device transmitting data to a smart device according to some embodiments of the present specification
  • Fig. 15 is an exemplary flow chart of a monitoring video distribution method according to some embodiments of this specification.
  • Fig. 16 is an exemplary flowchart of a method for determining an active scene according to some embodiments of this specification.
  • Fig. 17 is an exemplary flowchart of a method for identifying an identity according to some embodiments of this specification.
  • Fig. 18 is another exemplary flowchart of a method for identifying an identity according to some embodiments of this specification.
  • Fig. 19 is a schematic diagram of a monitoring video distribution method according to some embodiments of the present specification.
  • Fig. 20 is an exemplary flow chart of a comprehensive monitoring information management method according to some embodiments of this specification.
  • Fig. 21 is a schematic diagram of a first preset area and a second preset area according to some embodiments of the present specification
  • Fig. 22 is an exemplary flow chart of generating comprehensive monitoring information and sending it to a target terminal according to some embodiments of this specification;
  • Fig. 23 is another exemplary flow chart for generating comprehensive monitoring information and sending it to a target terminal according to some embodiments of this specification;
  • Fig. 24 is another exemplary flow chart for generating comprehensive monitoring information and sending it to a target terminal according to some embodiments of this specification;
  • Fig. 25 is a schematic diagram of determining that the operation of the security device is abnormal according to some embodiments of the present specification.
  • Fig. 26 is an exemplary flow chart of processing a face region image according to some embodiments of the present specification.
  • Fig. 27 is an exemplary flow chart of processing a face region image according to other embodiments of the present specification.
  • Fig. 28 is a schematic diagram of re-dividing the current image according to some embodiments of the present specification.
  • Fig. 29 is an exemplary flow chart of performing supplementary light processing on a face region image according to some embodiments of the present specification
  • Fig. 30 is an exemplary flow chart of performing enhancement processing on a face region image according to some embodiments of the present specification
  • Fig. 31 is a block diagram of an exemplary smart storage device 20 according to some embodiments of the present specification.
  • Fig. 32 is a block diagram of an exemplary index information management system 30 according to some embodiments of the present specification.
  • Fig. 33 is a block diagram of another exemplary index information management system 40 according to some embodiments of the present specification.
  • Fig. 34 is an exemplary flowchart of index information management according to some embodiments of this specification.
  • Fig. 35 is an exemplary flowchart of authentication steps according to some embodiments of this specification.
  • Fig. 36 is an exemplary application scenario diagram of an index information management system according to some embodiments of this specification.
  • Fig. 37 is a schematic structural diagram of a multimedia device shown in some embodiments of this specification.
  • Fig. 38 is an application example diagram of a multimedia device shown in some embodiments of this specification.
  • Fig. 39 is an application example diagram of a multimedia device shown in other embodiments of this specification.
  • Fig. 40 is a schematic structural diagram of a multimedia device shown in other embodiments of this specification.
  • Fig. 41 is a schematic structural diagram of a multimedia device embedded in a smart device shown in some embodiments of this specification.
  • Fig. 42 is a schematic structural diagram of a multimedia device embedded in a smart device shown in other embodiments of this specification.
  • Fig. 43 is a schematic structural diagram of a multimedia device shown in other embodiments of this specification.
  • Fig. 44 is a schematic structural diagram of a multimedia device shown in other embodiments of this specification.
  • Fig. 45 is a schematic structural diagram of a multimedia device shown in other embodiments of this specification.
  • Fig. 46 is a schematic structural diagram of a multimedia device shown in other embodiments of this specification.
  • Fig. 47 is a schematic structural diagram of a multimedia device shown in other embodiments of this specification.
  • Fig. 48 is a schematic diagram of the setting position of the smart lock doorbell shown in some embodiments of this specification.
  • Fig. 49 is a schematic structural diagram of a multimedia device shown in other embodiments of this specification.
  • Figures 50 to 59 are schematic diagrams of the layout of the components contained in the face recognition device shown in some embodiments of this specification;
  • Figs. 60 to 62 are schematic structural diagrams of multimedia devices shown in other embodiments of this specification.
  • Fig. 63 is a schematic structural diagram of a smart device shown in some embodiments of this specification.
  • Fig. 64 is a schematic structural diagram of the smart lock shown in some embodiments of this specification when it is locked;
  • Fig. 65 is a schematic diagram of the video module of the smart lock shown in some embodiments of this specification.
  • Fig. 66 is a schematic layout diagram of an IR projector, an IR flood light source and an IR camera in a face recognition device for a smart lock shown in some embodiments of this specification.
  • The terms "system", "device", "unit" and/or "module" used herein are a means for distinguishing different components, elements, parts, sections or assemblies at different levels.
  • However, these words may be replaced by other expressions if those expressions can achieve the same purpose.
  • Fig. 1 is a schematic diagram of an application scenario of a control method according to some embodiments of this specification.
  • The control method and system in one or more embodiments of this specification can be applied to various scenarios in the security field, such as banks, hotels, hospitals, computer rooms, warehouses, confidential rooms, office buildings, office areas, schools, kindergartens, residential areas, factories, elevators, etc.
  • the control method and system can be used in various spaces or areas that need to be managed, for example, it can be applied to doors or windows in the spaces or areas that need to be managed, so as to control the opening and closing of the spaces or areas.
  • the control method and system can also be used for various windows, balconies, rooftops, and the like.
  • the control method and system can be used in bank counters, ATM machines, and the like.
  • The control method and system can be used in various storage devices such as storage boxes, safe deposit boxes, and safes.
  • control method and system can be used in monitoring systems of warehouses, garages, rental houses, apartments, schools, dormitories, prisons, and the like.
  • The control method and system in some embodiments of this specification can control the working status of the smart device or its components, so as to perform security control on the security area related to the smart device (e.g., acquire and process information related to security control).
  • the control system and method can control the smart device to collect relevant information of the security area, and control the smart device to perform corresponding operations (for example, record relevant information, report information, alarm, etc.).
  • The control system and method can generate index information related to smart devices for preset objects (such as people or items) and changes in their related information.
  • control system and method in some embodiments of this specification can control the opening or closing of the smart device or its management area, and can also manage the information of preset objects related to the smart device.
  • the control system and method can be applied to the control of a safe deposit box used by a single person or multiple people and the information management of stored personal items in a household use scenario.
  • the control system and method can be applied to the control of storage cabinets used by one or more people in office use scenarios, and the information management of stored public and private goods.
  • the control system and method can be applied to warehouse control, information management of items stored in the warehouse, and the like.
  • control system and method can be applied to the control of smart access control in usage scenarios such as homes and the management of trusted users entering and leaving the smart access control.
  • the application scenario 1000 involved in the control method provided by some embodiments of this specification may include a smart device 110 , a server 140 , a processor 150 , a terminal device 160 , a network 170 and a storage device 180 .
  • the smart device 110 may include a device that provides security functionality.
  • The smart device 110 may include a smart access control 110-1, a smart window 110-2, a smart lock 110-3, a smart storage device 110-4, a smart monitoring device, or other devices with security functions, or any combination thereof.
  • the smart device 110 may be preset with a security area and be able to acquire information in the security area.
  • the security area may include, but not limited to, the installation area of the smart device 110 , the area around the installation location, the management area, the set work area, and the like.
  • the smart access control 110-1 can obtain or monitor information on the door and the environment inside and outside the door.
  • Smart access control 110-1 can be an access control system for doors in various places, including but not limited to access control systems for entry doors, unit doors, building doors, villa doors, courtyard doors, garage doors, warehouse doors, etc.
  • the smart window 110-2 can acquire or monitor information on the window and the environment inside and outside the window.
  • the smart lock 110-3 may include a video lock equipped with one or more cameras.
  • the smart lock 110-3 can obtain or monitor information in the management area of the lock and the environment around the lock.
  • the smart storage device 110-4 can acquire information inside and outside its management area.
  • the smart storage device 110-4 may include, but is not limited to, a safe, a safe deposit box, a key box, a gun box, a jewelry box, a medicine box, and the like.
  • Intelligent monitoring devices may include but are not limited to warehouse monitoring devices, garage monitoring devices, supermarket monitoring devices, shopping malls monitoring devices, dormitory monitoring devices, rental room monitoring devices, apartment monitoring devices, hospital monitoring devices, school monitoring devices, prison monitoring devices, etc.
  • the smart device 110 may include one or more collection devices 1111 .
  • The smart device 110 can acquire information (for example, trigger information or related information) in the security area through the collection device 1111.
  • the collection device 1111 may include a first collection device 120 and a second collection device 130 .
  • The first collection device 120 may collect information of a first preset area (for example, trigger information or related information).
  • The second collection device 130 may collect information of a second preset area (for example, trigger information or related information).
  • Server 140 may be used to manage resources and process data and/or information from at least one component of the system or from an external data source (eg, a cloud data center).
  • the server 140 may process trigger information or related information acquired by the smart device 110 .
  • the server 140 may process the user information acquired by the smart device 110, and complete security verification of the user according to the user information.
  • the server 140 may process the detection information related to the preset object acquired by the smart device 110, and determine the preset object information based on the detection information.
  • the server 140 may identify the service request of the trusted user and provide the service to the trusted user based on the service request.
  • the server 140 can communicate with other devices in the control system through the network 170 .
  • the smart device 110 may have a networking function, and the server 140 may directly communicate with the smart device 110 through the network 170 . In some embodiments, the smart device 110 may not have a networking function, and the server 140 may communicate with the smart device 110 through the terminal device 160 .
  • the server 140 may be hardware or software.
  • the server 140 may be a hardware device provided on a smart device to provide data processing functions.
  • the server 140 may be software enabling the gateway to implement data processing functions.
  • Processor 150 may process data, information and/or processing results obtained from other devices or system components, and execute program instructions based on these data, information and/or processing results to perform one or more of the functions described in this specification.
  • The processor 150 may process the first video and the second video, and determine the push level.
  • processor 150 may include one or more processing engines (eg, single-chip processing engines or multi-chip processing engines).
  • the control system may include one or more processors 150 .
  • the plurality of processors 150 may include any one or more of the processor 150 disposed on the server 140 , the processor disposed on the smart device 110 , and the processor disposed on the terminal device 160 .
  • the terminal device 160 may be a terminal or software associated with the smart device 110 .
  • the terminal device 160 may be used by one or more users, and the one or more users may include users who directly use the service, or other related users.
  • users may include property owners, family members, security personnel, property personnel, and the like.
  • the user of the terminal device 160 may be a trusted user of the smart device 110 .
  • the user information of the user of the terminal device 160 may have a corresponding relationship with the terminal device 160 .
  • the terminal device 160 may be a mobile device 160-1, a tablet computer 160-2, a laptop computer 160-3, a desktop computer 160-4, a door lock indoor unit 160-5, a wearable device, etc.
  • the terminal device 160 can perform data (eg, video information or index information, etc.) interaction with the smart device 110 .
  • the terminal device 160 may serve as a data receiving device and a display terminal of the data receiver, for receiving and displaying the received data information.
  • the user can view relevant information collected by the smart device 110 or other security information through the terminal device 160 .
  • the user may also retrieve user information and/or item information related to the smart device 110 through the terminal device 160 .
  • the user can operate various components of the system based on the terminal device 160 .
  • the user can control the lock body of the smart device to lock or unlock based on the terminal device 160 .
  • Network 170 may include channels that provide for the exchange of information.
  • the smart device 110 , the first collection device 120 , the second collection device 130 , the server 140 , the processor 150 , the terminal device 160 and the storage device 180 can exchange information through the network 170 .
  • the server 140 may receive relevant information collected by the first collection device 120 through the network 170 .
  • network 170 may include one or more network access points.
  • network 170 may include wired or wireless network access points, such as base stations and/or network switching points 170-1, 170-2, ..., through which one or more components of the control system may connect to network 170 to exchange data and/or information.
  • Storage device 180 may be used to store data and/or instructions.
  • the storage device 180 may be used to store data and/or instructions obtained from, for example, the smart device 110, the first collection device 120, the second collection device 130, and the like.
  • the storage device 180 may store data and/or instructions for the server 140 to perform or use to perform the exemplary methods described in this specification.
  • the storage device 180 may store user information, relevant information collected by the smart device, and the like.
  • the storage device 180 may include one or a combination of mass storage, removable storage, volatile read-write storage, and read-only memory (ROM).
  • the storage device 180 may be implemented through the cloud platform described in this specification.
  • the cloud platform may include one or a combination of private cloud, public cloud, hybrid cloud, community cloud, distributed cloud, cross-cloud, multi-cloud, etc.
  • Fig. 2 is an exemplary block diagram of a control system according to some embodiments of the present specification.
  • the control system 2000 may include a smart device 203 .
  • the control system 2000 may further include a server 201 and a terminal device 202, and data interaction between the server 201, the terminal device 202 and the smart device 203 may be realized through a network.
  • the smart device 203 may include a collection device 2031 and a communication module 2032 .
  • the collection device 2031 is a device or structure capable of receiving or collecting information.
  • the information collected by the collection device may have various information forms.
  • the information collected by the collection device 2031 may include, but is not limited to, digital information, video information, optical information, acoustic information, temperature information, mechanical or kinematic information, biometric information, user operation information, and the like.
  • biometric information may include, but is not limited to, one or more of fingerprint information, finger vein information, palm print information, palm vein information, iris information, retina information, face information, and voiceprint information.
  • the acquisition device 2031 may receive trigger information.
  • the trigger information may include, but is not limited to, operation information, human body detection information, abnormality detection information, and the like.
  • the collecting device 2031 can collect relevant information in the security area.
  • the security area may include one or more preset areas (eg, a first preset area and/or a second preset area).
  • the multiple preset areas may be different areas.
  • the multiple preset areas may or may not overlap with one another.
  • the collection device 2031 may include, but is not limited to, one or more of a camera device, a detection device, and an input device.
  • the camera device can acquire video information of a preset area.
  • the camera device may include, but is not limited to, a normal camera, a high-definition camera, a visible light camera, an infrared camera, an optical flow camera, a night vision camera, a monocular camera, a binocular camera, and the like.
  • the camera device can determine, based on the acquired video information, whether a preset object exists in a preset area (for example, a management area, a first preset area, a second preset area, a third preset area, etc.), and whether the preset object in the preset area has changed.
  • the collection device may further include a supplementary light.
  • the detection device can acquire detection information of a preset area.
  • the detection device can detect whether there is a preset object in the preset area, and whether the preset object in the preset area changes.
  • the detection device may include, but is not limited to, human body infrared detectors, laser detectors, ultrasonic detectors, pressure sensors, sound detectors, shock detectors, temperature detectors, smoke detectors, and the like.
  • the smart device 203 may be provided with one or more detection devices, and different types of trigger information may be detected by different detection modules.
  • the input device can obtain user input information.
  • the input information may include, but is not limited to, password information, operation instructions, and the like.
  • the password information may include, but is not limited to, one or more of fingerprint information, finger vein information, palm print information, palm vein information, iris information, retina information, face information, voiceprint information, digital password information, and the like.
  • the operation instruction may include, but is not limited to, a query instruction, a display instruction, an edit instruction, a deletion instruction, and the like.
  • the number of collection devices 1111 can be set to one or more.
  • the collection device 1111 may include a first collection device 120 and a second collection device 130 .
  • the first acquisition device 120 may be used to acquire relevant information (for example, the first video) of the first preset area.
  • the second collection device 130 may be used to obtain relevant information (for example, a second video) of the second preset area.
  • the first preset area and the second preset area may at least partially overlap to fully cover the entire security area.
  • the first preset area and the second preset area may be monitoring blind areas of each other.
  • different security levels may be set for the first preset area and the second preset area. For more details about the first preset area and the second preset area, and about acquiring their relevant information, refer to other parts of this specification (for example, FIG. 20 and its related description); the relevant descriptions will not be repeated here.
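As a toy illustration of how the first and second preset areas can partially overlap (or, conversely, cover each other's blind spots), the sketch below models each area as an axis-aligned rectangle. The `Rect` type, the coordinates, and the `overlaps` helper are illustrative assumptions, not part of this specification.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x1: float
    y1: float
    x2: float
    y2: float  # axis-aligned corners, with x1 < x2 and y1 < y2

def overlaps(a: Rect, b: Rect) -> bool:
    # Two axis-aligned rectangles intersect iff they overlap on both axes.
    return a.x1 < b.x2 and b.x1 < a.x2 and a.y1 < b.y2 and b.y1 < a.y2

# Hypothetical layout: the first preset area (in front of the door) and the
# second preset area (corridor) share a strip, reducing monitoring blind spots.
first = Rect(0, 0, 4, 3)
second = Rect(3, 0, 7, 3)
print(overlaps(first, second))  # True: the areas overlap for 3 <= x <= 4
```

The same helper returning `False` would correspond to the other arrangement mentioned above, where the two preset areas are disjoint and act as each other's blind spots.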
  • the smart device 110 may use part or all of the information acquired by one or more collection devices 1111 as the trigger information of the smart device 110 .
  • the trigger information may include security information generated by the collecting device 1111 in various situations indicating the occurrence of a security event.
  • the smart device 110 may generate security information indicating that the password unlock event occurs.
  • the smart device 110 may generate security information indicating that an event of a person standing in front of the door has occurred.
  • the smart device 110 may control one or more collection devices 1111 to collect relevant information related to the trigger information.
  • the collection device 1111 for collecting relevant information and the collection device 1111 for obtaining trigger information may be the same collection device, or may be different collection devices.
  • the communication module 2032 may be a module for implementing data interaction between the smart device 110 and other devices.
  • the communication module 2032 may have a control function.
  • the communication module 2032 can control the exchange of information or instructions between the smart device 203 and the server 201, the processor 150, and the network 170.
  • the communication module 2032 can control the working state of the collection device 2031.
  • the communication module can realize the communication between the smart device 110 and the server 140, the processor 150, the terminal device 160, or the network 170 through a communication line.
  • the communication line can be a bus, such as a UART (Universal Asynchronous Receiver/Transmitter), I2C (Inter-Integrated Circuit), or USB (Universal Serial Bus) bus.
  • the smart device 203 may further include a lock body 2033 .
  • the lock body 2033 may be a device or structure that controls the smart device 110 to open or close the management area.
  • the unlocking mode of the lock body 2033 may include mechanical unlocking and non-mechanical unlocking.
  • the mechanical unlocking can be a way of unlocking by inserting into the keyhole a key whose shape matches the lock.
  • Non-mechanical unlocking can include, but is not limited to, one or more of fingerprint unlocking, finger vein unlocking, palm vein unlocking, password unlocking, iris unlocking, face recognition unlocking, Bluetooth unlocking, NFC unlocking, and other non-mechanical unlocking functions.
  • Fig. 3 is an exemplary flow chart of a control method according to some embodiments of this specification.
  • As shown in FIG. 3, the process 100 may include the following steps.
  • the process 100 may be executed by the control system in some embodiments of this specification.
  • Step 1010: obtain, based on the trigger information of the smart device, relevant information collected by one or more collection devices of the smart device. In some embodiments, step 1010 may be performed by the processor 150.
  • the relevant information may refer to information related to the security area of the smart device.
  • the relevant information may include video information or image information collected by a collection device (eg, a camera device).
  • the relevant information may include a first video in a first preset area and a second video in a second preset area.
  • the relevant information may include video information in the third preset area.
  • relevant information may include images of a facial area.
  • the relevant information may include information related to preset objects in the management area of the smart device collected by the collection device (for example, a detection module).
  • the relevant information may include change information (for example, item access information) of preset objects in the management area.
  • the relevant information may include user access information to the management area.
  • the smart device can record relevant information collected by one or more collection devices. In some embodiments, the smart device may store relevant information in the storage device 180 for subsequent use.
  • Trigger information may refer to information that triggers at least some components (for example, one or more collection devices) of the smart device to start working.
  • the trigger information may come from outside the smart device.
  • the trigger information may also come from within the smart device.
  • the trigger information may include security information indicating a security event of the smart device.
  • the security information may include preset object detection information, face recognition information, access information, operation information, and the like.
  • the operational information may include control information for the smart device.
  • the smart device may be a smart lock, and the operation information may include unlocking information, locking information, and the like.
  • the unlocking information may include remote unlocking, local unlocking, biometric unlocking, password unlocking, voice unlocking, mechanical unlocking, App unlocking, associated device unlocking, and the like.
  • the preset object detection information is used to indicate that a preset object (eg, a person) enters a security area (eg, a management area, a first preset area or a second preset area) of the smart device.
  • the trigger information includes a user's request to access a management area associated with the smart device.
  • the collecting device of the smart device may acquire preset object information and/or preset object change information of the management area during the trusted user's visit to determine index information.
  • the trigger information includes operation information of the lock body of the smart device, and when the smart device detects the operation information of the lock body, the collection device may be triggered to collect relevant information.
  • the operation information on the lock body may include legal unlocking, illegal unlocking, manual unlocking, manual locking, successful unlocking, and unlocking failure.
  • the trigger information may also include a device linkage event.
  • the device linkage event may be that the running state of a certain component in the smart device meets the triggering condition for running one or more other components.
  • the trigger information may be acquired by one or more collection devices of the smart device.
  • the camera device (for example, a camera) of the smart device may capture information that the smart device uses as trigger information.
  • the input device of the smart device may receive a user's access request, and the smart device may use the access request as trigger information.
  • the detection module (for example, a pressure sensor or a vibration detector) of the smart device may detect trigger information.
  • the communication module of the smart device may receive a device linkage event, and generate trigger information based on the device linkage event.
  • one or more collection devices of the smart device may be triggered to collect relevant information of the security area.
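The routing from trigger information to the collection devices that record relevant information can be sketched as a simple dispatch table. All event and device names below are assumptions made for illustration; the specification does not define a concrete mapping.

```python
# Illustrative mapping from a trigger type to the collection devices that
# should start collecting relevant information of the security area.
TRIGGER_TO_DEVICES = {
    "password_unlock": ["camera_device"],
    "person_at_door": ["camera_device", "detection_device"],
    "access_request": ["camera_device", "input_device"],
    "device_linkage": ["camera_device"],
}

def devices_for_trigger(trigger: str) -> list:
    # An unknown trigger activates no collection device.
    return TRIGGER_TO_DEVICES.get(trigger, [])

print(devices_for_trigger("person_at_door"))  # ['camera_device', 'detection_device']
```

Note that, as the text says, the device that *detected* the trigger and the device(s) activated to collect relevant information may be the same or different entries in such a table.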
  • Step 1020: process the relevant information and/or the trigger information based on a preset algorithm, and control the smart device to perform corresponding operations.
  • step 1020 may be performed by processor 150 .
  • the corresponding operation may be an operation performed by the smart device and corresponding to the relevant information and/or trigger information.
  • the corresponding operation may also include setting the working mode of the communication module.
  • the control system can process the trigger information through a preset algorithm, and set the working mode of the communication module of the smart device based on the processing result.
  • the working mode of the communication module corresponds to the trigger information.
  • the corresponding operation may further include: controlling the working state of one or more collection devices based on the working mode.
  • the trigger information may include face recognition information.
  • the corresponding operation may also include processing the face recognition information (for example, adjusting exposure weight) to obtain a high-quality face image.
  • the relevant information may include video information, and the corresponding operation may include: adding identification information to the video information based on the trigger information.
  • the identification information may indicate a security event relative to the video information.
  • the identification information may indicate the push level of the video information.
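The idea of attaching identification information (a security-event tag plus a push level) to a recorded video can be sketched as follows. The field names and the event-to-level mapping are assumptions for illustration only.

```python
def tag_video(video_id: str, security_event: str) -> dict:
    # Assumed mapping from a security event to a push level
    # (a larger number meaning a more urgent notification).
    push_levels = {
        "lock_picking": 3,
        "duress_unlock": 3,
        "doorbell": 2,
        "person_passing": 1,
    }
    return {
        "video_id": video_id,
        "security_event": security_event,       # event relative to the video
        "push_level": push_levels.get(security_event, 1),
    }

print(tag_video("clip_0001", "doorbell"))
```

A server receiving such a record could then decide, purely from `push_level`, whether to push the video to the terminal device immediately or merely archive it.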
  • the relevant information may include the first video of the first preset area and/or the second video of the second preset area, and the corresponding operation may include: generating comprehensive monitoring information based on the first video and/or the second video.
  • the relevant information may include information about preset objects in the management area associated with the smart device, and the corresponding operation may include: determining index information based on change information of the preset objects in the management area.
  • the smart device includes a communication module and a camera module.
  • the camera module may be one or more camera devices in one or more acquisition devices.
  • the wake-up and shutdown of the camera module can be controlled by the communication module.
  • if the communication module does not receive trigger information and relevant information does not need to be collected, the camera module is in an idle state, and the communication module and the camera module can be controlled to shut down, such as controlling the communication module and the camera module to be powered off.
  • if trigger information is detected and the camera module is required to record video, the control system can wake up the communication module first, and then control the camera module to wake up through the communication module after the communication module is woken up, such as controlling the camera module to be powered on after the communication module is powered on.
  • if the communication module and the camera module are turned off when the camera module is in an idle state, the power consumption can be greatly reduced, but when the camera module is required to record video, the camera module can only be woken up after the communication module is woken up.
  • this process causes the camera module to take too long to go from power-off to power-on, so the camera module starts recording slowly and some relevant information may be missed, which is not conducive to timely security control.
  • the communication module may include multiple working modes, and the power consumption of the multiple working modes is different.
  • the control system can process the trigger information based on a preset algorithm, set the working mode of the communication module, and make the communication module be in the working mode matching the trigger information.
  • the communication module can remain powered on, and the power consumption of the communication module can be reduced by setting different working modes of the communication module.
  • the communication module can control the working state of the camera module in the smart device based on the working mode corresponding to the trigger information.
  • the camera module can include one or more of the collection devices in the smart device (for example, a camera device), and the communication module can control the working state of the one or more collection devices, thereby saving the energy consumption of the camera module.
  • for example, when no video is being recorded, the camera module can be controlled to be turned off or in a dormant state.
  • another one or more of the collection devices can remain on all the time to monitor the trigger information of the security area.
  • the smart device sends the trigger information to the communication module after detecting it; because the communication module remains powered on, it can receive the trigger information and skip the cold-start step when responding to it.
  • the working state of the camera module is then controlled directly based on the working mode, which shortens the time for the camera module to switch from one working state to another.
  • the communication module can omit the cold-start step when waking up the camera module, shortening the wake-up time and speeding up the start of recording by the camera module.
  • according to the trigger information, the communication module can place itself in a working mode that matches the trigger information.
  • a low-power working mode can be used to save energy consumption, and this also improves the flexibility of mode selection.
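The matching of a working mode to the trigger information can be sketched as a small selection function. The mode names follow the first/second/third working modes described in this document; the selection policy itself is a simplifying assumption, not the specification's algorithm.

```python
def select_mode(trigger_received: bool, current_mode: str) -> str:
    # Preset rule from the text: a module in the third (always-networked)
    # working mode keeps that mode when trigger information arrives.
    if current_mode == "third":
        return "third"
    # A trigger requires the networked first mode so recorded video can be
    # uploaded promptly.
    if trigger_received:
        return "first"
    # With no trigger, stay in (or fall back to) the low-power, disconnected
    # second mode; the module remains powered on in every mode.
    return "second"

print(select_mode(trigger_received=True, current_mode="second"))  # first
```

Because the module is powered in all three modes, switching between them avoids the cold-start cost that powering off would incur.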
  • a smart device may include a video lock or a door equipped with a camera and a smart lock.
  • the video lock can be equipped with a camera module on the lock; while realizing the unlocking and door-opening functions, it adds a visual function that enables video interaction at the door.
  • the environment where the video lock is located can be monitored through the camera module, such as collecting images or recording videos, and the picture of that environment can also be live-streamed.
  • the communication module in the video lock may include a WIFI communication module.
  • the acquisition device in the video lock may include a camera module.
  • the video lock can also control the working state of the camera module while providing the camera module with a WIFI network through the WIFI communication module, such as controlling the wake-up and shutdown of the camera module.
  • the video lock can also combine the image information or video information acquired by the camera module with the state of the video lock (eg, unlocked state or locked state) to generate a security event.
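Combining a camera observation with the lock state to generate a security event, as described above, can be sketched with a small rule function. The event names and state strings are assumptions for illustration.

```python
from typing import Optional

def derive_security_event(person_detected: bool, lock_state: str) -> Optional[str]:
    # Combine the camera module's observation with the video lock's state.
    if person_detected and lock_state == "locked":
        return "person_staying_at_locked_door"
    if person_detected and lock_state == "unlocked":
        return "person_entering"
    return None  # nothing noteworthy to report

print(derive_security_event(True, "locked"))  # person_staying_at_locked_door
```

The resulting event string could then serve as the identification information attached to the corresponding video clip, or as trigger information for further processing.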
  • Fig. 4 is an exemplary flow chart of a method for controlling a smart device according to some embodiments of this specification.
  • the smart device may include a communication module and a camera module (that is, a collection device), and the process 200 may be executed by the communication module.
  • the process 200 may include the following steps:
  • Step 201: the communication module in the smart device receives trigger information.
  • the communication module may be a module with a control function (such as power consumption control).
  • the smart device can control the wake-up and shutdown of the camera module through the communication module, such as controlling the power-on and power-off of the camera module through the communication module.
  • the communication module can be a SOC (System on Chip) type communication module; the SOC type communication module can include a control module and a communication module, where the control module can execute instructions for controlling the camera module of the smart device, and the communication module is used to provide a network connection for the smart device, such as providing a wireless network, so that the smart device can communicate using the wireless network provided by the communication module.
  • the communication module remains in a powered state such that the communication module can receive trigger information without rebooting or waking up.
  • the control module in the communication module is in a powered state, and the control module can receive the trigger information when the trigger information is detected.
  • on the one hand, the trigger information can be used to adjust the working mode of the communication module; on the other hand, it can also be used to control the working state of the camera module in the smart device, such as switching the camera module from off to awake.
  • the trigger information may include at least one of a person detection event and a device linkage event.
  • the person detection event and the device linkage event are used to indicate that a person enters the monitoring area of the smart device.
  • for example, a person detection event can be that a person is detected in the monitoring area of the device, and a device linkage event can be that a component in the smart device triggers the camera module, etc.
  • the trigger information includes someone passing by the door, someone staying in front of the door, successful unlocking, opening the door in armed mode, ringing the doorbell, multiple unlocking errors, picking the lock, unlocking under duress, etc.
  • someone passing by the door and someone staying in front of the door are person detection events, and the other events are device linkage events.
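The classification just described can be sketched as a lookup over the example triggers listed above; the category assignments follow the text, while the identifier spellings are assumptions.

```python
# Triggers that indicate a detected person (per the text above).
PERSON_DETECTION_EVENTS = {"someone_passing_by_door", "someone_staying_at_door"}

# Triggers raised by other components of the smart device.
DEVICE_LINKAGE_EVENTS = {
    "unlock_success", "door_opened_in_armed_mode", "doorbell_ring",
    "multiple_unlock_errors", "lock_picking", "duress_unlock",
}

def classify_trigger(trigger: str) -> str:
    if trigger in PERSON_DETECTION_EVENTS:
        return "person_detection"
    if trigger in DEVICE_LINKAGE_EVENTS:
        return "device_linkage"
    return "unknown"

print(classify_trigger("doorbell_ring"))  # device_linkage
```

Such a classification matters because, as the following lines explain, different trigger types can be detected by different detection modules (face detection, PIR, door lock main board).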
  • the smart device may also include a detection module.
  • the detection module may include one or more of the acquisition devices.
  • the collection device included in the detection module may be different from the collection device included in the camera module.
  • the collection device included in the detection module may be partly the same as the collection device included in the camera module.
  • the collection devices included in the detection module and the collection devices included in the camera module may all be the same.
  • the trigger information may be detected by a detection module in the smart device.
  • the smart device may include multiple detection modules, and different types of trigger information may be detected by different detection modules.
  • the person detection event can be detected by the face detection module and/or a PIR (passive infrared) sensor in the smart device, and the doorbell and door lock events can be detected by the door lock main board.
  • the trigger information detected by the detection module can be transmitted directly to the communication module; for example, the detection module and the communication module are connected through an electrical connection or wirelessly, and the trigger information can be transmitted to the communication module through that electrical or wireless connection.
  • when the trigger information is received, the control module in the communication module is in a powered state while the networking part of the communication module is in a power-off state, and the control module can receive the trigger information through an electrical connection.
  • the communication module can include at least two communication modules, and the network types and power consumption corresponding to the two communication modules are different.
  • for example, the communication module includes a WIFI communication module and a Bluetooth communication module, and the power consumption of the Bluetooth communication module is less than that of the WIFI communication module; when the Bluetooth communication module is in a powered state and the WIFI communication module is in a power-off state, the control module can receive trigger information through the Bluetooth communication module.
  • Step 202: in response to the trigger information, the communication module enters a working mode matching the trigger information and controls the working state of the camera module in the smart device.
  • the control logic of the camera module can be embedded in the control module of the communication module, so that the camera module is controlled through the communication module, such as controlling the wake-up and shutdown of the camera module, and controlling the camera module to record video in response to trigger information.
  • the communication module can be a SOC-type WIFI communication module, which can control the camera module and can also provide a WIFI network for the camera module, uploading the videos recorded and the images collected by the camera module through the WIFI network.
  • the working modes of the communication module include a first working mode, a second working mode and a third working mode.
  • the power consumption relationship of these three working modes is: the power consumption of the communication module in the first working mode is greater than that in the second working mode, but less than that in the third working mode.
  • in the first to third working modes, the communication module is in a powered state; for example, the control module in the communication module is powered, or the control module and the Bluetooth communication module in the communication module are both powered.
  • in the first working mode, the communication module is in the networked state (that is, communicating with the network), and in the second working mode, the communication module is in the disconnected state (that is, communication with the network is interrupted).
  • in the third working mode, the communication module is also in the networked state, but the networking duration of the communication module (that is, the time for which it maintains communication with the network) is longer than in the first working mode.
  • the networking time of the communication module is related to the working time of the camera module, and the working time of the camera module refers to the time it takes for the camera module to record and collect images.
  • for example, the networking duration of the communication module in the third working mode is 7×24 hours (i.e., 7 days), and the networking duration of the communication module in the first working mode is 24 hours (i.e., 1 day).
  • the time for the communication module to maintain communication with the network in the third working mode is longer than the time for the communication module to maintain communication with the network in the first working mode, so that the video recorded by the camera module and the image collected can be uploaded through the communication module.
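The three working modes described above can be summarized in a small configuration table; the hour values follow the 24 vs 7×24 example in the text, and the table structure itself is an illustrative assumption.

```python
# Assumed summary of the three working modes: the communication module stays
# powered in every mode, and the modes differ in networking behavior.
WORKING_MODES = {
    "first":  {"powered": True, "networked": True,  "networking_hours": 24},
    "second": {"powered": True, "networked": False, "networking_hours": 0},
    "third":  {"powered": True, "networked": True,  "networking_hours": 7 * 24},
}

# Power consumption ordering from the text: second < first < third,
# since longer networking implies higher consumption.
assert (WORKING_MODES["third"]["networking_hours"]
        > WORKING_MODES["first"]["networking_hours"]
        > WORKING_MODES["second"]["networking_hours"])
print("power consumption: second < first < third")
```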
  • feasible ways in which the communication module enters a matching working mode in response to the trigger information and controls the working state of the camera module include, but are not limited to, the following:
  • one way of controlling the working state of the camera module is that the communication module enters the first working mode in response to the trigger information and wakes up the camera module through the communication module; in the first working mode, the communication module is in the networking state.
  • the communication module can provide a network to the camera module when it is in the first working mode; if the communication module has already entered the first working mode, it remains in the first working mode; if the communication module is in the second working mode, it switches from the second working mode to the first working mode.
  • in response to the trigger information, if the communication module is in the third working mode, the communication module remains in the third working mode according to preset rules and does not switch modes, because in the third working mode the communication module is already in a networked state.
  • the preset rule may be that when the communication module is in the third working mode, if trigger information is received, the communication module maintains the working mode unchanged, that is, it is still in the third working mode.
  • when the communication module switches from the second working mode to the first working mode, the communication module actively establishes a network connection, such as actively connecting to a router and establishing a network connection (such as a wireless connection) through the router.
  • the communication module establishes a network connection and wakes up the camera module at the same time, or wakes up the camera module first, and the communication module establishes a network connection during the process of waking up the camera module.
  • if the camera module wakes up earlier than the communication module establishes the network connection, the camera module starts video recording after waking up.
  • by the time recording starts, the communication module may have completed establishing the network connection, so the video recorded by the camera module can still be uploaded in time.
  • the sequence of establishing the network connection of the communication module and waking up the camera module is not limited in this embodiment.
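The point that the camera can wake (and even start recording) while the network connection is still being established can be sketched with two concurrent tasks. The timings, function bodies, and the `events` log are illustrative assumptions; the sleep durations merely model "connecting takes longer than waking".

```python
import threading
import time

events = []

def establish_network_connection():
    time.sleep(0.2)  # pretend connecting to the router takes a while
    events.append("network_connected")

def wake_camera_and_record():
    time.sleep(0.01)  # the camera wakes quickly from its low-power state
    events.append("camera_recording")

# Establish the connection in the background while waking the camera in
# parallel; recording can begin before the connection completes.
t = threading.Thread(target=establish_network_connection)
t.start()
wake_camera_and_record()
t.join()
print(events)  # the camera typically starts recording before the network is up
```

This mirrors the text: the order of the two operations is not limited, and uploading can still proceed once the connection finishes.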
  • Feasible ways to wake up the camera module include, but are not limited to, controlling the start of the camera module through the communication module, such as controlling the camera module to be powered on through the communication module, or controlling the camera module to perform state switching through the communication module, such as switching from a low power consumption state or a sleep state to the working state, which indicates that the camera module can start recording or collecting images.
  • because the wake-up is performed while the communication module is already powered on, compared with first switching the communication module from the power-off state to the power-on state, the power-on cold start is omitted and the wake-up time is shortened.
  • shortening the time consumed to wake the camera module speeds up the start of recording by the camera module.
  • another way of controlling the working state of the camera module may be that, in response to a closing command for the camera module, the communication module maintains the first working mode and keeps the camera module in the wake-up state; after a first preset time period, the communication module switches from the first working mode to the second working mode and turns off the camera module.
  • the first preset time period begins counting when the closing command is received.
  • the shutdown instruction can also be used to instruct the camera module to switch states.
  • the communication module After receiving the shutdown instruction of the camera module, the communication module will not immediately control the shutdown of the camera module, but will keep the camera module in the wake-up state. Turn off the camera module after the first preset time period. Controlling the camera module in this way is mainly based on the following scenario considerations:
  • after the camera module is turned off, the detection module in the smart device may detect trigger information again, or the communication module may receive new trigger information. In these two scenarios, the communication module would have to wake up the camera module immediately after turning it off; such frequent state switching of the camera module increases power consumption to a certain extent.
  • therefore, the camera module is turned off only after the first preset time period has elapsed since the shutdown instruction was received.
  • the value of the first preset time period can be set according to actual needs, which is not limited by the embodiment of this specification.
  • feasible ways to turn off the camera module include, but are not limited to, controlling the camera module to be powered off through the communication module; or controlling the camera module to be in a low power consumption state through the communication module; or controlling the camera module to be in a sleep state through the communication module.
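  • The delayed-shutdown behavior described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the class and method names, and the 10-second value for the first preset time period, are assumptions for illustration only.

```python
class CameraController:
    """Sketch: on a close command the camera stays awake for a first preset
    period; a new trigger during that period cancels the pending shutdown,
    avoiding frequent off/on switching of the camera module."""

    def __init__(self, first_preset_seconds=10):
        self.first_preset_seconds = first_preset_seconds
        self.camera_awake = False
        self._shutdown_deadline = None  # None means no shutdown pending

    def wake(self):
        # Trigger information arrived: wake the camera and cancel any
        # pending shutdown started by an earlier close command.
        self.camera_awake = True
        self._shutdown_deadline = None

    def close_command(self, now):
        # Do not turn the camera off immediately; start the preset period.
        self._shutdown_deadline = now + self.first_preset_seconds

    def tick(self, now):
        # Called periodically; turns the camera off once the period elapses.
        if self._shutdown_deadline is not None and now >= self._shutdown_deadline:
            self.camera_awake = False
            self._shutdown_deadline = None
```

A trigger arriving inside the preset period simply calls `wake()` again, so the camera never has to be powered off and immediately re-awakened.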
  • another way of controlling the working state of the camera module is that, in response to a mode switching instruction, the communication module enters the working mode pointed to by the mode switching instruction; after receiving at least one type of event among person detection events and device linkage events, the communication module controls the working state of the camera module according to that event.
  • a parameter setting interface can be displayed on the terminal device, and the parameter setting interface can set the working mode of the communication module, the identification of the communication module and the parameters of other modules in the smart device.
  • the parameter setting interface can set the resolution of the camera module in the smart device, trigger information for triggering the camera module to record video, and the like.
  • the user can select the working mode of the communication module in the parameter setting interface, and then send the mode switching command to the smart device through the terminal device and the server, and the mode switching command can carry the working mode of the communication module.
  • the communication module enters the working mode pointed to by the mode switching instruction.
  • when the communication module receives the trigger information, it can also adjust its own working mode according to the trigger information.
  • the way of adjusting the working mode can refer to the description of the above two ways, and will not be described in detail here.
  • the communication module can control the working state of the camera module.
  • the communication module wakes up the camera module.
  • when the communication module receives the shutdown instruction of the camera module, the communication module can keep the camera module in a wake-up state and turn off the camera module after a first preset time period, for example by controlling, through the communication module, the camera module to be powered off, to be in a low power consumption state, or to be in a sleep state.
  • the detection modules in the video lock, such as the door lock main board and the PIR (passive infrared sensor), can detect whether someone enters the monitoring area of the video lock.
  • the door lock main board can detect unlocking events and doorbell events (device linkage events); for example, the door lock main board detects successful unlocking, opening the door in arming mode, pressing the doorbell, multiple unlocking errors, lock picking, duress unlocking, and so on.
  • Personnel detection events such as people passing by the door and people staying in front of the door can be detected through the PIR.
  • the trigger information detected by the door lock main board and the PIR can be transmitted to the communication module through an electrical connection or wirelessly.
  • the control module in the communication module is in a powered state, and the control module receives the trigger information through an electrical connection.
  • the communication module switches from the second working mode of power-on and disconnection to the first working mode of power-on and networking, and controls the camera module to start, such as controlling the camera module to be powered on, and recording video through the camera module.
  • the communication module provides the camera module with a wireless network, and uploads the video recorded by the camera module through the wireless network.
  • if the communication module is in the third working mode (such as a 7×24-hour networking working mode) when responding to the trigger information, the communication module can be maintained in the third working mode, and the camera module can be controlled to be powered on.
  • after the camera module finishes video recording, the camera module enters an idle state. In this case, the likelihood that the smart device will continue to use the communication module and the camera module is reduced.
  • the communication module can disconnect the network connection without being powered off, that is, enter the second working mode, and the camera module may be powered off, in a low power consumption state, or in a dormant state, so as to reduce the power consumption of the smart device.
  • the reason the communication module disconnects from the network without being powered off is that the power-on and power-off of the camera module are controlled by the communication module. If both the communication module and the camera module were powered off, restoring them would require first powering on the communication module and then powering on the camera module; the communication module would need to go through a cold-start process, which takes a long time, so the overall time from power-off to power-on would also be long. In view of this problem, the communication module in this embodiment disconnects from the network while remaining powered.
  • although the power consumption of the smart device in this state is greater than when the communication module is both disconnected and powered off, the communication module skips the cold-start process when the camera module needs to be powered on, shortening the time taken for the camera module to go from power-off to power-on.
  • when the communication module is in the third working mode, the camera module can also remain powered on all the time, or the camera module can decide whether to power on or off according to its own state. For example, when the camera module is in an idle state, it can send a power-off request and the communication module controls the camera module to power off; if trigger information or a video control instruction is detected, the communication module can control the camera module to power on.
  • the video control instruction may be a live broadcast instruction, a remote host viewing instruction, a video call instruction, and the like.
  • when the communication module is in the third working mode, it can receive a video control command sent by the terminal device, respond to the video control command by controlling the camera module to record video, and transmit the video to the terminal device through the communication module.
  • the first working mode can be regarded as a standard mode, and the third working mode can be regarded as a real-time viewing mode.
  • in the standard mode, the communication module closes the networking function by default but is not powered off; that is, the communication module does not stay connected to the router by default, so standby power consumption is at its lowest. If the communication module receives trigger information in standard mode, such as when someone passes by the smart device, the communication module actively connects to the router and controls the camera module to power on, so that the video recorded by the camera module is pushed and uploaded remotely. In the standard mode, the power consumption of the smart device is therefore low and the battery life is long, but the user cannot actively watch the live broadcast (because the camera module is powered off and not recording video); viewing is possible only after trigger information wakes the camera module.
  • the working time of the camera module after trigger information is received can also be set, so that the live broadcast can be actively watched within a period of time after the camera module is woken up by the trigger.
  • the communication module will always be connected to the router in the real-time viewing mode, which makes it possible (but is not limited) to view the live broadcast at any time, 7×24 hours; this makes the power consumption of the smart device high and the battery life short.
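  • The relationship between the three working modes and trigger information can be sketched as below. The mode names and exact transitions are simplifications assumed for illustration; the embodiment describes them only in prose.

```python
from enum import Enum

class Mode(Enum):
    NETWORKED = 1      # first working mode: powered on and networked
    DISCONNECTED = 2   # second working mode: powered on, network off
    ALWAYS_ON = 3      # third working mode: always connected (7x24 viewing)

class CommModule:
    """Sketch of how trigger information affects the working mode."""

    def __init__(self, mode):
        self.mode = mode
        self.networked = mode in (Mode.NETWORKED, Mode.ALWAYS_ON)
        self.camera_on = False

    def on_trigger(self):
        # In the disconnected mode, reconnect by switching to the first
        # working mode; the other modes are simply maintained.
        if self.mode is Mode.DISCONNECTED:
            self.mode = Mode.NETWORKED
            self.networked = True
        # In all modes, the trigger powers on the camera to record.
        self.camera_on = True
```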
  • the process of viewing the live broadcast at any time, 7×24 hours, may include: in some embodiments, if the working mode of the communication module is the third working mode, receiving a live broadcast instruction sent by the terminal device; in response to the live broadcast instruction, controlling the camera module to record video; and transmitting the video to the terminal device through the communication module.
  • the terminal device may provide a live broadcast entry in the parameter setting interface, and if an activation operation for the live broadcast entry is detected (for example, the user clicks on the live broadcast entry), the terminal device sends a live broadcast instruction to the smart device.
  • the communication module triggers the camera module to record video after receiving the live broadcast instruction. If the camera module is powered off when the live broadcast instruction is received, the communication module first controls the camera module to power on and then instructs it to record video.
  • the camera module can record video in this case, and the live broadcast entry in the corresponding terminal device can be in an active state, that is, the live broadcast entry can be triggered, so that when the working mode of the communication module is the first working mode but the communication module has established a network connection, the real-time video can also be remotely viewed in the form of a live broadcast.
  • the smart device can also transmit voice data in full-duplex mode to achieve seamless intercom, so that remote calls feel more like face-to-face communication.
  • the user can control the communication module to switch between these two working modes through the terminal device.
  • the working mode switching can be completed through the parameter setting interface in the terminal device.
  • the communication module in the smart device receives the trigger information; in response to the trigger information, the communication module is in a working mode matching the trigger information, and controls the working state of the camera module in the smart device through the communication module.
  • the fact that the communication module can receive the trigger information indicates that the communication module is in a powered state; therefore, when responding to the trigger information, the communication module can skip the cold-start step, directly control the working state of the camera module, and shorten the time taken to switch the camera module from one working state to another.
  • the communication module can skip the cold-start step when waking up the camera module, shortening the wake-up time and speeding up the start of recording by the camera module.
  • the communication module can put itself in a working mode matching the trigger information according to the trigger information, so that the working mode of the communication module matches the trigger information, thereby improving the flexibility of mode selection.
  • Fig. 5 is an exemplary flow chart of a method for controlling a smart device according to other embodiments of this specification. It shows another optional process 300 of the method for controlling a smart device provided by some embodiments of this specification, which may include the following steps:
  • Step 301 the communication module in the smart device receives trigger information.
  • Step 302 in response to the trigger information, the communication module is in a working mode matching the trigger information, and controls the working state of the camera module in the smart device through the communication module.
  • Step 303 after the camera module wakes up, record the video in the monitoring area of the smart device through the camera module.
  • the recording duration of the camera module can be determined according to the trigger information. For example, a recording duration corresponding to each type of trigger information and a maximum recording duration are preset; after receiving the trigger information, a matching recording duration is looked up among the preset recording durations, and the camera module is then controlled to record a video of at least the matching recording duration.
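  • The lookup of a recording duration from the trigger type might look like the following sketch; the trigger names, durations, and fallback value are illustrative assumptions, not values from the embodiment.

```python
# Preset recording duration (seconds) per trigger type, plus a cap.
PRESET_DURATIONS = {"doorbell": 30, "person_passing": 10, "lock_pried": 60}
MAX_DURATION = 120

def recording_duration(trigger_type):
    """Find the matching preset duration, falling back to a default for
    unknown triggers, and never exceed the maximum recording duration."""
    duration = PRESET_DURATIONS.get(trigger_type, 10)
    return min(duration, MAX_DURATION)
```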
  • Step 304 obtaining a human figure detection result obtained by performing human figure detection based on the video.
  • the purpose of the human figure detection is to detect whether there is a human figure in the video. If there is a human figure, it means that there is a person existing/staying in the monitoring area, and the camera module continues to record the video of the monitoring area, and monitors the behavior of the personnel in the monitoring area through the video.
  • Humanoid detection can be performed by a smart device or by a server, a terminal device, etc. that communicate with the smart device. Ways of humanoid detection based on video include but are not limited to the following ways:
  • the human shape detection method may include, during the time period recorded by the camera module, performing human shape detection at preset intervals.
  • the human shape detection method may include starting the human shape detection after recording for a period of time through the camera module, for example, the human shape detection may be performed every preset time when the human shape detection is started, or the human shape detection may be continuously performed.
  • the human shape detection method may include continuously performing human shape detection during the recording time period of the camera module. The value of the preset interval used when performing human figure detection is not limited. Continuously performing human figure detection may mean extracting another frame of image from the video for detection after completing one human figure detection, or extracting another frame of image from the video for detection while human figure detection is still being performed on a previous frame.
  • the images in the video may change while the camera module is recording. By continuously extracting at least one frame of image from the video, different images can be extracted so that human shape detection is performed on different images.
  • the multiple frames of images on which detection is performed change as recording time elapses, so that when the environment where the smart device is located changes, human shape detection can be performed on images of the changed environment.
  • the human figure detection can use an image recognition algorithm to detect whether there is a human figure in the image. Because the image is obtained from the video corresponding to the monitoring area of the smart device, whether there is a person in the monitoring area can be determined by detecting whether there is a human figure in the image, so as to determine whether the trigger information was caused by a person and to avoid false triggering of the camera module.
  • PIR is prone to being falsely triggered by changes in ambient temperature, and in some situations or environments the human body can only be detected when it moves substantially. If there is no one in the monitoring area of the smart device but a temperature change in the monitoring area causes the PIR to detect trigger information, the camera module will be activated and start recording video. For the video recorded by the camera module, the smart device or server can extract multiple frames of images from the video and perform human figure detection on each extracted frame. If no human figure is detected in the multiple frames of images, it is determined that the PIR falsely triggered the camera module, thereby improving triggering accuracy by using humanoid detection to filter out false triggers from the trigger information.
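  • The false-trigger filtering could be expressed as the following sketch, where `detect_humanoid` stands in for the image-recognition algorithm (its interface here is an assumption):

```python
def is_false_trigger(frames, detect_humanoid):
    """Sample frames extracted from the recorded video are checked one by
    one; if no frame contains a humanoid, the PIR trigger is treated as a
    false trigger (e.g., caused by an ambient temperature change)."""
    return not any(detect_humanoid(frame) for frame in frames)
```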
  • Step 305 if the human figure detection result indicates that no human figure is detected from the video within the second preset time period, control the camera module to end the video recording.
  • the second preset duration is for extracting multiple frames of images from the video to perform human figure detection on the multiple frames of images, and the value of the second preset duration is not limited in this embodiment of this specification.
  • the camera module can be controlled to end the video recording, reducing the time of invalid work of the camera module, thereby reducing power consumption.
  • the method for controlling a smart device shown in FIG. 4 above may further include the following steps:
  • Step 306 if the human figure detection result indicates that a human figure is detected from the video within the second preset time period, control the camera module to extend the recording time period.
  • Step 307 if the recording time of the camera module is extended by a third preset time, and no person detection event is received within the third preset time, control the camera module to end the video recording.
  • the trigger information may be missed, in which case the camera module would not be triggered.
  • for example, if a person stands still in front of the door, the PIR cannot detect them, so the PIR will not generate trigger information and the camera module will miss this recording. Humanoid detection can be used as a supplement to trigger information to further improve accuracy.
  • controlling the camera module to extend the recording duration may mean controlling the camera module to extend recording by a certain duration, for example, 5 seconds.
  • in the process of controlling the camera module to extend the recording time, it is also possible to detect whether there is a person detection event. If the recording time of the camera module is extended by the third preset duration and no person detection event is detected within that duration, the camera module is controlled to end the recording; if a person detection event is detected while the recording duration is being extended toward the third preset duration, the camera module is controlled to continue recording.
  • humanoid detection is introduced and the recording duration is extended to solve the problem of discontinuous video slices. The following is an example to illustrate:
  • PIR can detect large movements of the human body, and the PIR detection distance is limited; every time the PIR is triggered, the video recording time is extended by 5 seconds. If the person does not move or is outside the detection distance, the PIR will not generate trigger information, and correspondingly the camera module will stop recording after recording for 5 seconds; if the person then moves again, trigger information is generated and recording restarts after having stopped, forming two recordings that are very close to each other. If humanoid detection is used as a supplement, recording continues as long as a humanoid is detected, avoiding interruption of the video recording and thus solving the problem of discontinuous video recording.
  • the purpose of introducing the third preset duration for detecting whether there is trigger information is to extend the recording duration while keeping the video from becoming too long: if no person detection event is detected within a period of time, it indicates that the monitoring area of the smart device is unlikely to have changed, and the video content will not change greatly during this period.
  • continuing to record may also capture unnecessary video content, so if no person detection event is detected within the third preset duration, the video recording of the camera module may end.
  • the value of the third preset duration is not limited.
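  • Steps 306 and 307 amount to the following decision rule, sketched here with an assumed value for the third preset duration:

```python
def should_continue_recording(humanoid_seen, seconds_since_person_event,
                              third_preset=15):
    """Keep recording while a humanoid is still detected in the video;
    otherwise, stop once no person detection event has arrived within
    the third preset duration."""
    if humanoid_seen:
        return True
    return seconds_since_person_event < third_preset
```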
  • the camera module can send a shutdown command to the communication module; the communication module responds to the shutdown command and controls the camera module to remain in a wake-up state, turning off the camera module after the first preset duration, wherein the first preset duration is counted from when the shutdown command is received.
  • when the communication module is in the first working mode while responding to the shutdown instruction, the communication module can be maintained in the first working mode; after a first preset time period, the communication module switches from the first working mode to the second working mode.
  • Fig. 6 is an exemplary flow chart of a method for controlling a smart device according to other embodiments of this specification. It shows another optional process 400 of the method for controlling a smart device provided by some embodiments of this specification, which may include the following steps:
  • Step 401 the communication module in the smart device receives trigger information.
  • Step 402 in response to the trigger information, the communication module is in a working mode matching the trigger information, and controls the working state of the camera module in the smart device through the communication module.
  • Step 403 receiving a setting instruction for the smart device.
  • the setting instruction can be generated when the user specifies the working parameters of the smart device, and the working parameters can be set through the parameter setting interface displayed on the terminal device.
  • the setting instruction can be used to set the working parameters of the camera module; and/or, the setting instruction can be used to set the linkage condition and linkage operation of the smart device.
  • the terminal device can display a parameter setting interface.
  • the parameter setting interface can set the working mode of the communication module, the identification of the communication module and the parameters of other modules in the smart device.
  • the parameter setting interface can set parameters of the camera module in the smart device, such as resolution, detection sensitivity (such as sensitivity to trigger information), and the trigger information that triggers the camera module to record video; it can also set the linkage between the smart device and other devices, for example, setting the trigger distance value corresponding to the PIR, triggering the camera module when the PIR detects someone passing by, or triggering the camera module after a door lock event is detected.
  • the trigger distance value is the distance value used by the PIR to detect whether there is a person in the monitoring area; that is, the monitoring range of the PIR is a circular area with the PIR as the center and the trigger distance as the radius.
  • the working parameters set by the user through the parameter setting interface can be carried in the setting instruction, and the terminal device sends the setting instruction to the smart device to control the smart device to update the working parameters.
  • setting the linkage between the smart device and other devices can mean setting a linkage device related to the smart device, such as setting the speaker as a linkage device related to the smart device; the linkage of the speaker can be, for example, turning on the speaker when an event or voice communication is detected.
  • setting the linkage between the smart device and other devices can also set an event for triggering the work of other devices. For different smart devices, the events for triggering the work of other devices can be different. The following takes the smart device as a video lock as an example to illustrate:
  • the video lock supports linkage with a screen device (an electronic device with a display screen): when the doorbell is pressed, the scene in front of the door (such as video) is displayed on the speaker/TV in real time; it supports reporting trigger information generated by itself or by other devices as linkage conditions (if conditions); and it supports executing linkage operations (then executions) in response to trigger information reported by other devices.
  • the if linkage conditions can include: 1) someone passes by the door; 2) someone stays in front of the door; 3) the power of the camera is lower than 10%; 4) a face is recognized in the video; 5) someone rings the doorbell; 6) the door lock is pried;
  • performing linkage operations may include: 1) recording and uploading video; 2) remotely calling the camera module in the smart device.
  • the if linkage conditions and then linkage operations can be matched by the user in the parameter setting interface; that is, the user can select one or more conditions from the if conditions and one or more linkage operations from the then functions, and the smart device executes according to the user's choice. Of course, the user can also manually input if conditions and then linkage functions in the parameter setting interface; this embodiment does not limit the above if conditions and then linkage functions.
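  • The user-matched if/then linkage can be modeled as a rule table, as in the following sketch (the condition and operation names are illustrative, taken loosely from the examples above):

```python
# Each rule pairs a set of "if" conditions with a list of "then" operations.
LINKAGE_RULES = [
    {"if": {"doorbell_pressed"}, "then": ["record_and_upload_video"]},
    {"if": {"lock_pried"}, "then": ["record_and_upload_video",
                                    "remote_call_camera"]},
]

def linkage_operations(events):
    """Return the operations of every rule whose conditions intersect the
    set of reported events."""
    ops = []
    for rule in LINKAGE_RULES:
        if rule["if"] & events:
            ops.extend(rule["then"])
    return ops
```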
  • Step 404 responding to the setting instruction, controlling the smart device to perform synchronous update based on the settings in the setting instruction.
  • the working parameters carried by the setting instruction are used to set the identification of the communication module, such as the network identification of the WIFI module.
  • the network identification of the WIFI module is updated to the network identification carried in the setting instruction.
  • when the terminal device sends a setting instruction to the smart device, the communication module of the smart device may be in any of the first to third working modes. The network connection of the communication module is disconnected in the second working mode and valid in the first and third working modes. The response to the setting instruction differs depending on whether the network connection is disconnected or valid, as follows:
  • when the communication module is in the first working mode or the third working mode, the smart device is updated synchronously. In some embodiments, when the communication module is in the second working mode, the smart device responds to the setting instruction only after receiving trigger information or at every fourth preset time interval.
  • the reason for the difference is that if the network connection of the communication module is disconnected when the setting instruction is sent, the setting instruction sent by the terminal device is temporarily stored in the server; after the communication module re-establishes the network connection, the server sends the setting instruction to the smart device, or the smart device actively obtains the setting instruction from the server. This process delays the response to the setting instruction, meaning that the synchronization of working parameters between the terminal device and the smart device is delayed. If the network connection of the communication module is valid when the setting instruction is sent, the setting instruction can be delivered to the smart device immediately and the smart device can update the working parameters synchronously; that is, when the terminal device updates the working parameters of the smart device, the working parameters in the smart device are also updated immediately.
  • the communication module may re-establish the network connection after detecting trigger information or at every fourth preset time interval; the purpose of re-establishing the network connection at every fourth preset interval is to synchronize working parameter updates.
  • the value of the fourth preset duration is not limited in this embodiment.
  • the above-mentioned method for controlling a smart device may also include adaptively adjusting the trigger distance value of the PIR. The adaptive adjustment may be implemented by the camera module or the communication module in the smart device, or by the main control module of the smart device, and it is performed after the user enables this function.
  • the communication module obtains multiple historical trigger distance values and human-shaped detection results of each historical trigger distance value, and determines the target trigger distance value according to the multiple historical trigger distance values and the human-shaped detection results of the historical trigger distance values.
  • the human figure detection result is used to indicate whether there is a human figure in the video recorded by the camera module. When the video is recorded in response to a person detection event sent by the PIR, the human figure detection result reflects the accuracy of the PIR: if the result indicates that a human figure was detected, the person detection event sent by the PIR was an accurate event, meaning the PIR detected the person accurately and in time, and its historical trigger distance value is a relatively accurate distance value; if the result indicates that no human figure was detected, the person detection event was an erroneous event, and its historical trigger distance value is a distance value of low accuracy. Therefore, the smart device can adjust the historical trigger distance values according to the humanoid detection results corresponding to them to obtain the target trigger distance value. For example, it can select the historical trigger distance values corresponding to multiple humanoid detection results with the highest accuracy and perform an operation on these values (for example, calculating the mean) to obtain the target trigger distance value.
  • another way to determine the target trigger distance value is to obtain multiple historical trigger distance values, or the humanoid detection result for each historical trigger distance value, and determine the target trigger distance value based on them.
  • the communication module processes at least a plurality of historical trigger distance values according to a preset adjustment rule to obtain a target trigger distance value.
  • one preset adjustment rule is to adjust the current trigger distance value according to the variation trend of multiple historical trigger distance values to obtain the target trigger distance value; for example, if the change trend of multiple historical trigger distance values presents a gradually decreasing trend, the current trigger distance value is decreased (or, in other cases, increased) accordingly.
  • another preset adjustment rule is that the communication module obtains a calculation result of multiple historical trigger distance values, and the calculation result is the target trigger distance value; for example, the average of the multiple historical trigger distance values is used.
  • the communication module can also obtain the target trigger distance value according to the detection accuracy indicated by the humanoid detection results. If a humanoid detection result indicates that a human figure was detected, its detection accuracy is high; if it indicates that no human figure was detected, its detection accuracy is low. The communication module can obtain the target trigger distance value from the historical trigger distance values corresponding to humanoid detection results with high detection accuracy.
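As an illustration of the accuracy-based selection described above, the following sketch derives a target trigger distance from history entries confirmed by humanoid detection. The tuple data model and the mean operation are assumptions for illustration, not the patent's mandated implementation:

```python
# Sketch (assumed data model): each history entry pairs a PIR trigger
# distance with the humanoid-detection result of the video it triggered.
# Entries whose video confirmed a human figure are treated as accurate,
# and the target trigger distance is their mean.

def target_trigger_distance(history, default=None):
    """history: list of (distance_value, humanoid_detected) tuples."""
    accurate = [d for d, detected in history if detected]
    if not accurate:
        return default  # no confirmed detections to learn from
    return sum(accurate) / len(accurate)

# Two confirmed detections at 3.0 m and 5.0 m, plus one false trigger:
print(target_trigger_distance([(3.0, True), (5.0, True), (8.0, False)]))  # 4.0
```

The false-trigger entry (8.0 m) is excluded, so only distances backed by an accurate humanoid detection influence the result.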
  • this specification provides an exemplary structure of a control device, and this device can be specifically applied to a smart device.
  • the control device includes: a control module and a communication module.
  • the control module is used to receive trigger information; in response to the trigger information, the communication module is in a working mode matching the trigger information, and controls the working state of the camera module in the smart device through the communication module.
  • the communication module is used to provide the network to the smart device.
  • the control module is configured to respond to the trigger information and wake up the camera module through the communication module; wherein the communication module is in the networking state in the first working mode.
  • the control module responds to the trigger information; if the communication module is in the second working mode, the communication module switches from the second working mode to the first working mode, wherein in the second working mode the communication module is in a disconnected state, and the power consumption of the communication module in the first working mode is greater than its power consumption in the second working mode.
  • in response to the trigger information, if the communication module is in the third working mode, the communication module remains in the third working mode, wherein in the third working mode the communication module is in a networking state, and the networking duration of the communication module in the third working mode is longer than the networking duration of the communication module in the first working mode.
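The three working-mode cases above can be sketched as a small state machine. The mode names and the `CommModule` interface are illustrative assumptions, not the patent's actual API:

```python
# Assumed mode names for the three working modes described above.
MODE1 = "networked"         # networked, shorter networking duration
MODE2 = "disconnected"      # lowest power, network off
MODE3 = "always_networked"  # networked, longer networking duration

class CommModule:
    def __init__(self, mode=MODE2):
        self.mode = mode
        self.camera_awake = False

    def on_trigger(self):
        if self.mode == MODE2:
            self.mode = MODE1    # reconnect: switch from mode 2 to mode 1
        # in MODE1 or MODE3 the module is already networked and stays put
        self.camera_awake = True  # wake the camera module via the network

comm = CommModule()
comm.on_trigger()
print(comm.mode, comm.camera_awake)  # networked True
```

A module already in the third working mode stays there, matching the "remains in the third working mode" case above.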
  • waking up the camera module through the communication module includes: controlling the camera module to start through the communication module.
  • the control module is configured to respond to a close instruction for the camera module by keeping the communication module in the first working mode and the camera module in the wake-up state; after a first preset time period elapses, timed from receipt of the close instruction, the communication module switches from the first working mode to the second working mode and the camera module is turned off.
  • turning off the camera module includes: controlling the camera module to be powered off through the communication module; or controlling the camera module to be in a low power consumption state through the communication module; or controlling the camera module to be in a sleep state through the communication module.
  • the control module is configured to respond to a mode switching instruction by having the communication module enter the working mode indicated by the instruction; after receiving at least one type of event among person detection events and device linkage events, the control module controls the working state of the camera module through the communication module according to the received event.
  • the control module obtains the humanoid detection result based on humanoid detection performed on the video; if no human figure is detected from the video within the second preset duration, the camera module is controlled to end the video recording.
  • if the humanoid detection result indicates that a human figure is detected from the video within the second preset duration, the camera module is controlled to extend the recording duration; if the recording duration of the camera module has been extended by a third preset duration and no person detection event is received within that third preset duration, the camera module is controlled to end the video recording.
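The extend-or-end recording logic above can be sketched as follows. The preset durations and the detection timeline are illustrative assumptions:

```python
# Sketch of the recording control described above: recording ends after the
# second preset duration if no human figure is seen, and each detection that
# arrives before the current deadline extends recording by the third preset.

def recording_end_time(detection_times, second_preset, third_preset):
    """detection_times: seconds (from recording start) at which a human
    figure was detected in the video. Returns when recording ends."""
    deadline = second_preset
    for t in sorted(detection_times):
        if t <= deadline:                  # detection before recording ended
            deadline = t + third_preset    # extend the recording duration
    return deadline

print(recording_end_time([], 6, 8))      # 6  (no human figure: end at 6 s)
print(recording_end_time([2, 5], 6, 8))  # 13 (extended past each sighting)
```

With detections at 2 s and 5 s, the deadline moves from 6 s to 10 s and then to 13 s; a detection arriving after the deadline would not extend recording.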
  • the control module receives a setting instruction for the smart device; in response to the setting instruction, the smart device is controlled to perform a synchronous update based on the setting in the instruction. When the communication module is in the first working mode or the third working mode, the smart device is updated synchronously; when the communication module is in the second working mode, the smart device responds to the setting instruction when it receives trigger information or at every fourth preset time interval.
  • the setting instruction is for the camera module, and the setting instruction is used to set the working parameters of the camera module; and/or, the setting instruction is used to set the linkage condition and linkage operation of the smart device.
  • the control module obtains multiple historical trigger distance values and/or the humanoid detection result of each historical trigger distance value, and determines the target trigger distance value based on the multiple historical trigger distance values and/or the humanoid detection results.
  • the collection device can collect relevant information of the security area, but the user often needs to check the specific content of the information to know the security event reflected in the information. For example, the user needs to check the specific content of the image or video collected by the camera device, so as to know the security incidents occurring in the security area.
  • the trigger information may include security information
  • the related information may include video information
  • controlling the smart device to perform corresponding operations may include: adding identification information to the related information based on the trigger information.
  • the identification information may include a security identification.
  • controlling the smart device to perform corresponding operations may include: adding a security mark to the collected information based on the security information. After the smart device obtains security information indicating a security event, it can control the collection device to collect relevant information of the security area and add to the collected information a security mark that matches the security event indicated by the security information.
  • by adding a security mark that matches the security event to the information collected by the collection device, the user can understand the security event reflected in the corresponding security information directly from the mark, without checking the specific content of the collected information, which is convenient and fast. Because the user understands the security event reflected in the security information faster, corresponding security measures (such as calling the police or asking for help) can be taken sooner, realizing security protection in a more timely and effective manner.
  • Fig. 7 is an exemplary flowchart of a method for processing security information according to some embodiments of this specification. As shown in FIG. 7, method 500 may include:
  • Step 501 after obtaining the security information indicating the occurrence of a security event, control the collection device to collect information on the security area.
  • the collection device in some embodiments of this specification may include a device for collecting at least one type of information in the security area.
  • there are many kinds of relevant information in the security area, such as: image information (such as visible images and infrared images), sound information, vibration information, electromagnetic signal information, light intensity information, air index information, temperature information, humidity information, etc.
  • the method shown in FIG. 7 can be applied to a smart device, a terminal device or a server. In some embodiments, the method shown in FIG. 7 may be executed by a processor of a smart device, a terminal device, or a server.
  • the security event may be an event related to security protection.
  • security events are provided as examples below: someone passes by the door, someone stays in front of the door, unlocking succeeds, the door is opened in the armed mode, someone rings the doorbell, unlocking error, multiple unlocking errors, picking the lock, unlocking under duress, a stranger entering the home, fire.
  • the above-mentioned security events can be detected by equipment such as camera devices, electronic locks, and smoke detectors.
  • some embodiments of this specification can use a camera to take an image outside the door. When there is a person in the image and the person stays for a shorter time than the preset value, it can be determined that a security event of "someone is passing by the door" has occurred.
  • a camera device can be used to capture an image outside the door. When there is a person in the image and the person's staying time is not shorter than a preset value, it can be determined that a security event of "someone stays in front of the door" has occurred.
  • the camera device may determine whether there is a person in the image through detection techniques such as human shape detection and/or face detection. In some embodiments, the camera device can also determine whether the person in the image is a stranger or a person with legal unlocking authority (such as a house owner or a trusted user).
  • Some embodiments of this specification can monitor the unlocking process through the electronic lock to determine whether a security event related to unlocking occurs. In some embodiments, when someone successfully unlocks the lock through the unlocking function supported by the electronic lock, it may be determined that a security event of "unlocking successfully" occurs. In some embodiments, when someone fails to unlock the lock through the unlocking function supported by the electronic lock, it may be determined that a security event of "unlocking error” occurs. In some embodiments, when someone fails to unlock N times through the unlocking function supported by the electronic lock within a preset period of time, it can be determined that a security event of "multiple unlocking errors" occurs, where N is a natural number and not less than 2.
  • in some embodiments, when the electronic lock detects a lockpicking parameter (such as a vibration amplitude greater than a preset amplitude or a lock cylinder rotation force greater than a preset value), it may be determined that a security event of "picking the lock" occurs.
  • when the electronic lock detects that the lock is unlocked by an unlocking key corresponding to "unlocking by duress", it may be determined that a security event of "unlocking by duress" occurs.
  • the unlocking key corresponding to "unlocking by duress” may be a group of passwords set by the user to reflect the event of "unlocking by duress", or may be a fingerprint or finger vein of a certain finger of the user.
  • in some embodiments, when the electronic lock integrates the doorbell function and someone presses the doorbell, the electronic lock can detect that the doorbell is pressed, so as to determine that a security event of "someone rings the doorbell" occurs. In some embodiments, when the electronic lock detects that someone unlocks the lock through the mechanical unlocking function in the armed mode, or that someone unlocks it from inside the house in the armed mode, it can be determined that the security event of "opening the door in the armed mode" occurs. The armed mode is a security protection mode that can be adopted when no one is at home.
  • Some embodiments of this specification can detect smoke through a smoke detector.
  • the smoke detector can send security information indicating the occurrence of "fire" to the electronic device performing the method shown in FIG. 7.
  • Some embodiments of this specification can monitor indoor images through indoor cameras, and when a stranger is found indoors, it can be determined that a security event of "a stranger entering the home" has occurred.
  • the security information used to indicate the occurrence of a security event may include: an identifier of the security event (such as a name of the security event or a symbol representing the security event).
  • the security information may also include at least one of various information such as the occurrence time of the security event, the event content of the security event, and the event type of the security event.
  • the security area in some embodiments of this specification may be a target area that requires security protection, such as areas outside the door and inside the house.
  • Some embodiments of this specification can enable the collection device to collect information on the security area by setting and adjusting the placement position, orientation, collection parameters, etc. of the collection device.
  • after obtaining the security information indicating the occurrence of a security event, it may be determined that the corresponding security event has occurred; at this time, information may be collected on the security area to obtain information on the security area.
  • Step 502 Add a security mark to the information collected by the collection device, and the security mark matches the security event indicated by the security information.
  • the security identification may be an identification indicating a correspondence between relevant information and security information.
  • the security identifier may also indicate the correspondence between security information and security events.
  • the security identification added to the information collected by the collection device may include an identification of a security event.
  • the identification of the security event "someone is passing by the door" can be: a person is passing by.
  • the electronic device executing the security information processing method provided in some embodiments of this specification may also save the information with the security identification added locally on the electronic device or in a storage device of the control system.
  • An electronic device executing the security information processing method provided in some embodiments of this specification may also send the information with the security identification added to other devices, such as uploading to a server. If the transmission network is interrupted during the process of sending information, the transmission can be resumed after the network returns to normal, that is, resume transmission after disconnection.
  • the newly collected information may overwrite the information with the earliest collection time.
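The overwrite policy above can be sketched with a bounded buffer; the capacity of 3 and the use of Python's `deque` are illustrative assumptions:

```python
# Sketch of the overwrite policy above: when the buffer is full, the newly
# collected information evicts the entry with the earliest collection time.
from collections import deque

buffer = deque(maxlen=3)          # capacity is an illustrative assumption
for clip in ["clip1", "clip2", "clip3", "clip4"]:
    buffer.append(clip)           # deque drops the oldest item automatically

print(list(buffer))  # ['clip2', 'clip3', 'clip4']
```

Appending to a full `deque(maxlen=...)` discards the item at the opposite end, which matches overwriting the information with the earliest collection time.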
  • Fig. 8 is an exemplary flow chart of a method for processing security information according to other embodiments of this specification.
  • the security information processing method 600 provided by other embodiments of this specification may include:
  • Step 601 according to the security event indicated by the security information, set the collection duration (or collection time) of the collection device.
  • the corresponding collection duration can be set in advance for each security event, for example, the collection duration corresponding to "someone is passing by the door” is 6 seconds, and the collection duration corresponding to "someone is staying in front of the door” is 6 seconds.
  • the collection time corresponding to "unlocking successfully” is 6 seconds
  • the collection time corresponding to "opening the door in arming mode” is 6 seconds
  • the collection time corresponding to “someone rings the doorbell” is 6 seconds
  • the collection time corresponding to "unlocking error” is 6 seconds
  • the collection time corresponding to "multiple times of unlocking errors” is 30 seconds
  • the collection time corresponding to "picking the lock” is 30 seconds
  • the collection time corresponding to "unlocking by coercion” is 60 seconds
  • the collection time corresponding to "a stranger enters the home” is 60 seconds
  • the collection time corresponding to "fire” is 60 seconds.
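The per-event collection durations listed above can be gathered into a lookup table. The fallback of 6 seconds for unlisted events is an assumption for illustration:

```python
# Collection durations (seconds) for each security event, as listed above.
COLLECTION_SECONDS = {
    "someone passes by the door": 6,
    "someone stays in front of the door": 6,
    "unlocking successfully": 6,
    "opening the door in armed mode": 6,
    "someone rings the doorbell": 6,
    "unlocking error": 6,
    "multiple unlocking errors": 30,
    "picking the lock": 30,
    "unlocking by duress": 60,
    "a stranger enters the home": 60,
    "fire": 60,
}

def collection_duration(event):
    # Assumed default of 6 s for events not listed above.
    return COLLECTION_SECONDS.get(event, 6)

print(collection_duration("fire"))  # 60
```

Looking up the duration per event is what lets the collection duration match the security event, as step 601 describes.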
  • Step 601 may be performed after obtaining security information indicating that a security event occurs. After obtaining the security information, according to the security event indicated by the security information, the collection time corresponding to the security event can be determined, and then the collection time is set as the time for the collection device to collect information.
  • in this way, some embodiments of this specification can set the collection duration of the collection device separately for each security event, so that the duration of the information collected by the collection device matches the security event more closely, effectively improving the flexibility and adjustability of information collection.
  • a corresponding security level may be set for a security event.
  • the security level can be recorded in the security identification.
  • step 601 may include: determining the security level of the security event indicated by the security information; and setting the collection duration of the collection device to match the security level.
  • different security events may have the same or different security levels.
  • some embodiments of this specification can set a corresponding security level for each security event. For example: the security level of "someone passes by the door" is level 1; the security level of "someone stays in front of the door", "unlocking successfully", and "someone rings the doorbell" is level 2; the security level of "opening the door in armed mode" and "picking the lock" is level 4; and the security level of "unlocking by duress", "a stranger entering the home", and "fire" is level 5. The higher the security level, the higher the severity.
  • a matching collection duration can be set for each security level; for example, the higher the security level, the longer the collection duration matched to that level.
  • the duration of the collected information can be matched with the security level of the security event, effectively improving the validity of the information.
  • Step 602 After obtaining the security information indicating the occurrence of a security event, control the collection device to collect information on the security area.
  • Step 603 Add a security mark to the information collected by the collection device, and the security mark matches the security event indicated by the security information.
  • step 602 in FIG. 8 is the same as step 501 in FIG. 7, and step 603 in FIG. 8 is the same as step 502 in FIG. 7, so details are not repeated here.
  • multiple security events may occur consecutively in a short period of time. For example: someone walks by the door and then stays in front of the door; after two seconds, the person presses the doorbell; five seconds later the person tries to unlock the lock but fails, and then tries to pick the lock.
  • the security events involved in this process are: someone passing by the door, someone staying in front of the door, someone ringing the doorbell, unlocking error, and picking the lock.
  • after the control system in some embodiments of this specification obtains security information indicating the occurrence of a security event, it controls the collection device to collect information on the security area. If, while the control system is controlling the collection device to collect information on the security area according to security information indicating a first security event, security information indicating a second security event is obtained, the duration for which the collection device collects information on the security area can be adjusted according to the security information indicating the second security event (for example, the collection duration can be prolonged) without interrupting the collection process.
  • step 601 may specifically include:
  • the collection duration of the collection device is adjusted to the larger of the remaining portion of the first duration and the second duration.
  • the first duration is the duration set for the collection device to collect information according to the first security event. If the security information of the second security event is obtained, the collection device is still collecting information when the security information indicating the occurrence of the second security event arrives. At this time, according to the second security event, the duration of information collection by the collection device may be adjusted to the larger of the remaining portion of the first duration and the second duration. Some embodiments of this specification thus neither interrupt the process of the collection device collecting information on the security area, nor wait for the collection device to finish the first duration before restarting collection according to the second security event, which realizes continuous collection of information.
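The max-of-remainders adjustment above can be sketched as follows; the function and parameter names are illustrative:

```python
# Sketch of the adjustment above: when a second security event arrives
# mid-collection, the device keeps collecting for whichever is larger,
# the remainder of the first duration or the second event's duration.

def adjusted_remaining(first_duration, elapsed, second_duration):
    remaining_first = max(first_duration - elapsed, 0)
    return max(remaining_first, second_duration)

# First event allotted 30 s; 10 s in, a second event worth 6 s arrives:
print(adjusted_remaining(30, 10, 6))   # 20 (keep the longer remainder)
# Same point, but the second event is worth 60 s:
print(adjusted_remaining(30, 10, 60))  # 60 (extend to the new duration)
```

Because the result is a remaining duration rather than a restart, collection is continuous, matching the no-interruption requirement above.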
  • the second duration may be greater than the first duration.
  • the above-mentioned second security event may be a security event determined by a collection device (for example, a camera device), or may be a security event determined by a smart device (for example, an electronic lock).
  • a collection device for example, a camera device
  • a smart device for example, an electronic lock
  • the embodiments of this specification can adjust the duration for which the collection device collects information from the first duration to the second duration.
  • the electronic lock detects a security event related to unlocking
  • the duration of information collection by the collection device is adjusted from the first duration to the second duration.
  • the security information indicating the occurrence of the third security event may be obtained again.
  • the duration of information collection by the collection device may be continuously adjusted, for example, the duration of information collection by the collection device may continue to be increased.
  • some embodiments of this specification may also set an upper limit on the continuous collection time, such as 60 seconds.
  • the second duration is the sum of the first duration and the third duration, wherein the third duration matches the second security event, and the first duration matches the first security event.
  • the first security event is "someone is passing by the door”
  • the first duration matching the first security event is 6 seconds
  • the second security event is “someone is staying in front of the door”
  • the third duration of the second security event matching is 6 seconds
  • the second duration is 12 seconds.
  • the duration of information collection by the collection device is adjusted from 6 seconds to 12 seconds, so the collection device continues to collect information for another 6 seconds until 12 seconds of information have been collected, and then the information collection ends.
  • the second duration is the sum of the first duration and the third duration, wherein the third duration matches the security level of the second security event, and the first duration matches the security level of the first security event.
  • the first security event is "someone is passing in front of the door”
  • the first duration matching the security level of the first security event is 6 seconds
  • the second security event is "someone is staying in front of the door”
  • the third duration matched with the security level of the second security event is 6 seconds
  • the second duration is 12 seconds.
  • the duration of information collection by the collection device is adjusted from 6 seconds to 12 seconds, so the collection device continues to collect information for another 6 seconds until 12 seconds of information have been collected, and then the information collection ends.
  • when the second duration is the sum of the first duration and the third duration, some embodiments of this specification increase the duration of information collection by the collection device by a larger amount, which is beneficial for collecting more important information.
  • the second duration is the sum of the third duration and the fourth duration, wherein the third duration matches the second security event, the fourth duration is the duration between the first moment and the second moment, the first moment is the acquisition time of the security information indicating the occurrence of the first security event, and the second moment is the acquisition time of the security information indicating the occurrence of the second security event.
  • the first security event is "someone is passing by the door”
  • the first duration matching the first security event is 6 seconds
  • the second security event is “someone is staying in front of the door”
  • the third duration of the second security event match is 6 seconds
  • the first moment is 8:00:00 am on January 1, 2018, and the second moment is 8:00:02 am on January 1, 2018; then
  • the fourth duration is 2 seconds.
  • because the security information indicating the occurrence of the second security event is obtained 2 seconds after the first, some embodiments of this specification adjust the duration of information collection by the collection device from 6 seconds to 8 seconds; after obtaining the security information indicating the occurrence of the second security event, the collection device continues to collect for another 6 seconds until 8 seconds of information have been collected, and then ends the information collection.
  • the second duration is the sum of the third duration and the fourth duration, wherein the third duration matches the security level of the second security event, and the fourth duration is the duration between the first moment and the second moment
  • the first moment is the time when the security information indicating the occurrence of the first security event is obtained
  • the second moment is the time when the security information indicating the occurrence of the second security event is obtained.
  • the first security event is "someone is passing in front of the door"
  • the first duration matching the security level of the first security event is 6 seconds
  • the second security event is "someone is staying in front of the door "
  • the third duration matched with the security level of the second security event is 6 seconds
  • the first moment is 8:00:00 am on January 1, 2018, and the second moment is 8:00:02 am on January 1, 2018
  • the fourth duration is 2 seconds.
  • because the security information indicating the occurrence of the second security event is obtained 2 seconds after the first, some embodiments of this specification adjust the duration of information collection by the collection device from 6 seconds to 8 seconds; after obtaining the security information indicating the occurrence of the second security event, the collection device continues to collect for another 6 seconds until 8 seconds of information have been collected, and then ends the information collection.
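The gap-based computation in the examples above can be sketched as follows; the function and parameter names are illustrative:

```python
# Sketch of the variant above: the second duration is the third duration
# (matched to the second security event) plus the gap between the two
# events' acquisition times, so collection runs a full third duration
# from the second event onward.

def second_duration(third_duration, first_moment, second_moment):
    fourth_duration = second_moment - first_moment  # seconds between events
    return third_duration + fourth_duration

# First event at t=0 s, second event 2 s later, third duration 6 s:
print(second_duration(6, 0, 2))  # 8 -> collect 6 more seconds after t=2
```

This matches the 8:00:00/8:00:02 example: the total collection duration becomes 6 + 2 = 8 seconds, of which 6 seconds remain after the second event.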
  • in this way, after obtaining new security information, some embodiments of this specification enable the collection device to continue collecting, from the current moment, for a duration that matches the security event indicated by the new security information. Some embodiments can thus flexibly extend the collection duration according to the time at which the security information is obtained and, while ensuring the validity of the collected information, appropriately compress the volume of collected data, reducing the pressure on subsequent data transmission, data storage, data processing, data viewing, and other processes.
  • some embodiments of this specification add a security identification matching the security event to the information collected by the collection device after obtaining security information indicating the occurrence of that event. Therefore, if, while the collection device is collecting information on the security area according to security information indicating one security event, security information indicating another security event is acquired, this specification can add to the collected information, based on the two pieces of security information respectively, security identifications matching the security events they indicate.
  • after the security information indicating the occurrence of the second security event is obtained, a security mark is likewise added to the information collected by the collection device.
  • the specific process of adding the security marks may include: after obtaining the security information indicating the occurrence of the first security event, adding the first security mark to the information continuously collected by the collection device from the third moment, wherein the third moment is the moment at which the collection device is controlled to collect information on the security area after the first security information is obtained, and the first security mark matches the first security event.
  • after the security information indicating the occurrence of the second security event is obtained, a second security mark is added to the information continuously collected by the collection device from the third moment, and the second security mark matches the second security event.
  • the duration for the acquisition device to continuously collect information may exceed the first duration.
  • the information collected by the collection device includes not only information related to the first security event but also information related to the second security event. Therefore, this specification may add the first security mark and the second security mark to the information collected by the collection device in sequence, so as to reflect the security events captured in that information.
  • the first security identification and the second security identification sequentially added to the information collected by the collection device may both be stored in correspondence with that information.
  • the first security identification can also be replaced with the second security identification, that is, the security identification stored corresponding to the information collected by the acquisition device is changed from the first security identification to the second security identification.
  • the security level of the second security event can first be compared with the security level of the first security event. If the security level of the second security event is higher than that of the first security event, the first security identification is replaced with the second security identification; if the security level of the first security event is higher than that of the second security event, the second security identification is not added to the information collected by the collection device, and the first security identification can still be retained; if the security level of the first security event is equal to that of the second security event, either the first security identification or the second security identification can be stored in correspondence with the information collected by the collection device.
  • the above-mentioned specific process of adding the second security identification to the information continuously collected by the collection device from the third moment may include: adding the first security mark to the information continuously collected by the collection device from the third moment The security logo is replaced by the second security logo.
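  • As a rough illustration of the replacement rule described above — the event names and numeric levels below are assumptions for the example, not values fixed by this specification — the choice of which security identification to retain might be sketched as:

```python
# Hypothetical sketch: decide which security identification to store with
# the collected information when a second security event occurs during
# collection. Levels and event names are illustrative assumptions.

def choose_security_mark(first_mark, first_level, second_mark, second_level):
    """Return the identification to keep.

    A higher-level second event replaces the first mark; otherwise the
    first mark is retained (including the equal-level case, where either
    mark may be kept -- here we keep the first).
    """
    if second_level > first_level:
        return second_mark
    return first_mark

# Example: a "lock pick" event outranks "someone passing by the door".
assert choose_security_mark("passing_by", 1, "lock_pick", 3) == "lock_pick"
assert choose_security_mark("lock_pick", 3, "passing_by", 1) == "lock_pick"
```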
  • the first duration may be a duration for controlling the collection device to collect information according to the first security event.
  • deduplication processing may be performed on the obtained security information, and then the step of controlling the collection device to collect information on the security area and the subsequent steps are executed.
  • deduplication can be performed according to the generation time of the security information, and if there are multiple security information at the same time, only one of them is retained.
  • the same type of security information in the security information generated within a certain time range may also be deduplicated.
  • the collected information and/or security signs can also be deduplicated, so as to prevent a large amount of information collected in a short period of time and a large number of security signs added in a short period of time from causing interference to the user.
  • the above-mentioned specific process of de-duplicating the collected information and/or security identification may be similar to the specific process of de-duplicating the security information, and will not be repeated here.
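  • A minimal sketch of the deduplication described above — the record format (a dict with `time` and `type` keys) is an assumption made for illustration:

```python
# Illustrative sketch: deduplicate security information by generation time,
# keeping only one record per (generation time, event type) pair and
# preserving the first occurrence. The record format is an assumption.

def dedupe_security_info(records):
    seen = set()
    kept = []
    for rec in records:
        key = (rec["time"], rec["type"])
        if key not in seen:
            seen.add(key)
            kept.append(rec)
    return kept

records = [
    {"time": 1, "type": "doorbell"},
    {"time": 1, "type": "doorbell"},   # duplicate, dropped
    {"time": 2, "type": "doorbell"},
]
assert len(dedupe_security_info(records)) == 2
```

The same pattern could be applied to collected information or security identifications, keyed on whatever fields identify a duplicate.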
  • FIG. 9 is an exemplary flow chart of a method for processing security information according to other embodiments of this specification.
  • another security information processing method 700 provided by some embodiments of this specification may include:
  • Step 710 after obtaining the security information indicating the occurrence of a security event, control the collection device to collect information on the security area.
  • Step 720 adding a security mark to the information collected by the collection device, where the security mark matches the security event indicated by the security information.
  • Step 710 in FIG. 9 is the same as step 501 in FIG. 7, and step 720 in FIG. 9 is the same as step 502 in FIG. 7, so details are not repeated here.
  • Step 730 Process the event notification of the security event through a notification method that matches the security event.
  • the notification method of the event notification may include: sending the event notification to one or more devices by one or more communication methods.
  • the communication methods used by different notification methods may be the same or different, the notification forms of event notifications sent by different notification methods may be the same or different, and the devices to which different notification methods are sent may be the same or different.
  • the above-mentioned communication methods may include but are not limited to: Bluetooth, Wi-Fi, infrared, mobile communications, and the like.
  • the notification form of the event notification may include but not limited to: text, picture, graphic text, video, video with text, audio, web page, and the like.
  • the text contents included in the event notifications of different security events may differ. For example: the text content of the event notification of "someone is passing by the door" is "someone is passing by the door"; the text content of the event notification of "someone is staying in front of the door" is "someone is staying in front of the door"; the text contained in the event notification of "unlock success" can be adjusted according to the unlocking method.
  • for example, if the lock is opened by fingerprint, the text content is "fingerprint unlock success"; the text content contained in the event notification of "emergency key unlock" may be "during this period, someone unlocked the door with the emergency key"; the text content contained in the event notification of "someone rang the doorbell" is "someone rang the doorbell"; the text content contained in the event notification of "unlocking error" can be adjusted according to the unlocking method.
  • for example, if a wrong fingerprint is used, the text content will be "wrong fingerprint is trying to unlock"; the text content contained in the event notification of "multiple unlock errors" can be adjusted according to the unlock method, for example: if the fingerprint method fails to unlock multiple times, the text content is "Wrong fingerprints are frequently trying to open the lock, please confirm safety in time"; the text content contained in the "lock pick" event notification is "lock broken"; the text content contained in the "coerced lock unlock" event notification is "NG"; the text content contained in the event notification of "a stranger entered your home" is "a stranger entered your home"; the text content contained in the event notification of "fire" is "a fire broke out, please confirm your safety and call the fire alarm immediately".
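  • The event-to-text mapping above can be sketched as a simple lookup; the event keys, the method-specific adjustment, and the fallback string below are assumptions made for illustration:

```python
# Hypothetical sketch: map a security event to its notification text.
# Unlock-related events adjust the text to the unlocking method used.
# All keys and strings are illustrative assumptions.

NOTIFICATION_TEXT = {
    "passing_by": "someone is passing by the door",
    "staying": "someone is staying in front of the door",
    "doorbell": "someone rang the doorbell",
    "lock_pick": "lock broken",
}

def notification_text(event, unlock_method=None):
    if event == "unlock_success" and unlock_method:
        return f"{unlock_method} unlock success"
    if event == "unlock_error" and unlock_method:
        return f"wrong {unlock_method} is trying to unlock"
    return NOTIFICATION_TEXT.get(event, "security event occurred")

assert notification_text("unlock_success", "fingerprint") == "fingerprint unlock success"
assert notification_text("doorbell") == "someone rang the doorbell"
```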
  • the devices to which event notifications are sent may include, but are not limited to: mobile phones, computers, smart wearable devices (such as smart watches), servers, cloud devices, routers, gateways, and the like.
  • the event notification of the security event may include: information collected by the collection device with a security mark, security information, and the like.
  • both the security information and the information with the security identification can be sent to the server first, and then pushed to the user's terminal device (such as a mobile phone) by the server.
  • the server can save them, making it convenient for the user to view them anytime and anywhere.
  • the user can also authorize other people (such as friends) to view.
  • Some embodiments of this specification can effectively improve the pertinence and effectiveness of the notification method by processing the event notification of the security event through a notification method that matches the security event.
  • the collection device can be a camera device, and the information collected by the collection device can be images (including still images and videos); the camera device can save the collected images locally and add the security identification to the collected images.
  • the security information can be generated by a camera device or an electronic lock, and the device that generates the security information can send the security information to other devices through communication methods such as Bluetooth. Compared with sending security information through Wi-Fi, sending security information through Bluetooth can effectively save power.
  • the security information of a security event with a relatively low security level can be sent to other devices through Bluetooth, while the security information of a security event with a relatively high security level can be sent to other devices through Wi-Fi.
  • Wi-Fi needs to rely on devices that provide Wi-Fi networks (such as routers), while Bluetooth does not need to rely on other devices.
  • two Bluetooth-enabled devices can transmit data directly through Bluetooth, so sending security information through Bluetooth not only saves electric energy but also transmits more efficiently.
  • the transmission rate of Wi-Fi is higher than that of Bluetooth, so the security information of security events with a relatively high security level can be sent to other devices faster through Wi-Fi, ensuring the timeliness of sending security information. In this way, some embodiments of this specification can effectively ensure the timeliness, safety and effectiveness of transmission of security information of security events with a relatively high security level.
  • the electronic lock can send the security information to the camera device through the communication module.
  • after the camera device receives the security information, it can collect images of the security area and add security identifications to the images.
  • the camera device can send the image with the security logo to other devices via Wi-Fi, such as to a server.
  • the camera device is an electronic device including a controller and a camera. Therefore, when the security information processing method provided in some embodiments of this specification is applied to the camera device, the specific process of controlling the collection device to collect information on the security area may include: the controller of the camera device controls the camera of the camera device to collect information on the security area.
  • in addition to sending the information with the security identification to other devices, other information can also be sent to other devices, such as: the type of the security event, the generation time of the security event, the time at which the collection device collected the information, and the like.
  • Fig. 10 is an exemplary flowchart of a method for processing security information according to other embodiments of the present specification. Since the electronic equipment that executes the security information processing method provided by some embodiments of this specification often needs to perform data transmission with other devices, in order to improve the security of data transmission, some embodiments of this specification can also use the steps shown in Figure 10 to encrypt and decrypt data.
  • another security information processing method 800 provided in the embodiment of this specification may further include:
  • Step 801 Generate a first challenge code in response to the first challenge code request, and at least send the first challenge code to the sender of the first challenge code request.
  • the challenge code is also called a challenge password, which refers to a group of encrypted passwords generated according to the Challenge Handshake Authentication Protocol (CHAP), and is used to ensure that the user's real password is not leaked during transmission.
  • step 801 may be used to generate a first challenge code and feed it back to the sender of the challenge code request, so that the sender can encrypt the first data to be sent based on the first challenge code and the target key.
  • the electronic device that executes the security information processing method of some embodiments of this specification may decrypt the first data based on the first challenge code and the target key.
  • Step 802. Obtain the first data sent by the sender, and decrypt the first data based on the first challenge code and the target key, wherein the first data is encrypted by the sender using the first challenge code and the target key.
  • the target key can be stored in the sender and the electronic device that executes the security information processing method of some embodiments of this specification, respectively.
  • the electronic device executing the security information processing method provided by some embodiments of this specification can send the first challenge code request to another device (i.e., the receiver of the first challenge code request), and the receiver generates a challenge code and feeds it back.
  • the electronic device executing the security information processing method provided by some embodiments of this specification can encrypt the data to be sent according to the obtained challenge code and the target key, and then send the encrypted data to the other device, which can decrypt the data according to the challenge code and the target key.
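  • The specification does not fix a concrete cipher for this exchange; as a hedged sketch only, one could derive a one-time keystream from the shared target key and the challenge code (here via HMAC-SHA256) and XOR it with the payload. The function names and the keystream construction are illustrative assumptions, not the claimed implementation:

```python
# Hypothetical sketch of challenge-code encryption: the receiver issues a
# random challenge; the sender derives a keystream from (target key,
# challenge) and XORs it with the payload; the receiver repeats the same
# derivation to decrypt. HMAC-SHA256 keystream is an assumption.
import hmac, hashlib, secrets

def make_challenge():
    return secrets.token_bytes(16)

def _keystream(target_key, challenge, length):
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(target_key, challenge + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(target_key, challenge, plaintext):
    ks = _keystream(target_key, challenge, len(plaintext))
    return bytes(a ^ b for a, b in zip(plaintext, ks))

decrypt = encrypt  # XOR with the same keystream is its own inverse

key = b"shared-target-key"
challenge = make_challenge()              # generated by the receiver
ct = encrypt(key, challenge, b"new-key-material")
assert decrypt(key, challenge, ct) == b"new-key-material"
```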
  • Fig. 11 is an exemplary flow chart of a method for processing security information according to other embodiments of the present specification.
  • another security information processing method 900 provided by some embodiments of this specification may also include:
  • Step 901 Generate a first challenge code in response to the first challenge code request, and at least send the first challenge code to the sender of the first challenge code request.
  • step 901 is the same as step 801 in FIG. 10 , and will not be repeated here.
  • Step 902 Obtain the first data sent by the sender, and decrypt the first data based on the first challenge code and the target key, where the first data is encrypted by the sender using the first challenge code and the target key.
  • the target key is a default key
  • the first data is an encrypted new key
  • the new key is generated by the sender
  • step 902 is a specific implementation manner of step 802 in FIG. 10 .
  • the default key may be a key pre-set in the sender and the electronic device executing the security information processing method in some embodiments of this specification.
  • the default key may be set when the device leaves the factory, or it may be set by the sender and the electronic device implementing the security information processing method of some embodiments of this specification after establishing a binding relationship, or it may be set in other circumstances.
  • the embodiments of this specification are not limited here.
  • the new key may be a new key generated by the sender through various key generation algorithms such as a random algorithm.
  • the new key may be a one-time key, that is, it can only be used once.
  • Step 903 Generate a second challenge code in response to the second challenge code request sent by the sender, and at least send the second challenge code to the sender.
  • the first challenge code and the second challenge code are different.
  • Step 904 Obtain the second data sent by the sender, and decrypt the second data based on the second challenge code and the new key, where the second data is encrypted by the sender using the second challenge code and the new key.
  • steps 901 and 902 may decrypt the encrypted and transmitted new key based on the default key and the first challenge code, so as to obtain the new key, which ensures the security of the new key.
  • steps 903 and 904 can decrypt the encrypted and transmitted data (such as image data) based on the new key and the second challenge code, so as to obtain the transmitted data, ensuring the security of the data.
  • the data sender can send a challenge code request to the data receiver, and the data receiver generates a challenge code and returns the challenge code to the data sender.
  • the data sender encrypts the data to be sent based on the challenge code and the target key, and then sends it to the data receiver.
  • the data receiver decrypts the data according to the challenge code and the target key.
  • when an electronic device executing the security information processing method of some embodiments of this specification needs to send data, that electronic device is the above-mentioned data sender and can execute the processing process of the data sender; when it needs to receive data, it is the above-mentioned data receiver and can execute the processing process of the data receiver.
  • the above-mentioned data sender may be a camera, and the above-mentioned data receiver may be an electronic lock; or, the above-mentioned data sender may be an electronic lock, and the above-mentioned data receiver may be a camera.
  • Fig. 12 is an exemplary process diagram of synchronizing security identifications between a collection device and a smart device according to some embodiments of the present specification. Taking the camera device as the collection device, the electronic lock as the smart device, and the new key as the security identification as an example, the following provides a process for synchronizing the new key between the camera device and the electronic lock, as shown in Figure 12. This process can include:
  • Step 1001 establishing a binding relationship between the camera device and the electronic lock.
  • Step 1002 the electronic lock generates a new key.
  • Step 1003 the electronic lock sends a challenge code request to the camera device.
  • Step 1004 the camera device generates a challenge code, and returns the challenge code to the electronic lock.
  • Step 1005 the electronic lock uses the default key and the challenge code to encrypt the new key.
  • Step 1006 the electronic lock sends the encrypted new key to the camera device.
  • Step 1007 the camera device returns response data to the electronic lock.
  • after the camera device obtains the encrypted new key, it can decrypt the encrypted new key based on the default key and the challenge code, so as to obtain the new key. In this way, the process shown in Figure 12 realizes the synchronization of the new key.
  • the new key can be used for encryption, which is more secure.
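  • Steps 1001 to 1007 can be simulated at the flow level as below. The "encryption" is a stand-in (XOR with a digest of the default key and challenge) because the real cipher is not specified; the device classes and method names are assumptions made for illustration:

```python
# Hypothetical simulation of the Figure-12 key-synchronization handshake:
# the lock generates a new key, requests a challenge from the camera,
# masks the new key with (default key, challenge), and the camera unmasks
# it with the same inputs. The masking function is an assumption.
import hashlib, secrets

def mask(key, challenge, data):
    digest = hashlib.sha256(key + challenge).digest()
    return bytes(a ^ digest[i % 32] for i, a in enumerate(data))

class Camera:
    def __init__(self, default_key):
        self.default_key = default_key
        self.new_key = None
    def issue_challenge(self):                 # step 1004
        self.challenge = secrets.token_bytes(16)
        return self.challenge
    def receive_new_key(self, ciphertext):     # step 1007 (decrypt + ack)
        self.new_key = mask(self.default_key, self.challenge, ciphertext)
        return b"OK"

class Lock:
    def __init__(self, default_key):
        self.default_key = default_key
        self.new_key = secrets.token_bytes(32)  # step 1002
    def send_new_key(self, camera):             # steps 1003-1006
        challenge = camera.issue_challenge()
        ct = mask(self.default_key, challenge, self.new_key)
        return camera.receive_new_key(ct)

default = b"factory-default-key"
cam, lock = Camera(default), Lock(default)
lock.send_new_key(cam)
assert cam.new_key == lock.new_key   # both sides now hold the new key
```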
  • FIG. 13 is an exemplary process diagram of a smart device transmitting data to a collection device according to some embodiments of the present specification. Taking the transmission of data from the electronic lock to the camera as an example, FIG. 13 shows the process of transmitting data from the electronic lock to the camera. As shown in Figure 13, the process can include:
  • Step 1101 the electronic lock sends a challenge code request to the camera device.
  • Step 1102 the camera generates a challenge code, and returns the challenge code to the electronic lock.
  • Step 1103 the electronic lock uses the new key and the challenge code to encrypt the data to be sent.
  • the data to be sent by the electronic lock to the camera device may be various (such as Wi-Fi information data, security information, etc.), which is not limited in this embodiment of this specification.
  • Step 1104 the electronic lock sends encrypted data to the camera device.
  • Step 1105 the camera device returns response data to the electronic lock.
  • after the camera device obtains the encrypted data, it can decrypt the encrypted data based on the new key and the challenge code.
  • the camera device can collect information on the security area after obtaining the security information indicating the occurrence of a security event, and add to the collected information the security identification that matches the security event indicated by the security information.
  • FIG. 14 is an exemplary process diagram of data transmission from a collection device to a smart device according to some embodiments of the present specification. Taking the data transmission from the camera device to the electronic lock as an example, FIG. 14 shows the process of data transmission from the camera device to the electronic lock. As shown in Figure 14, the process can include:
  • Step 1201 the camera device sends a challenge code request to the electronic lock.
  • Step 1202 the electronic lock generates a challenge code, and returns the challenge code to the camera device.
  • Step 1203 the camera device uses the new key and the challenge code to encrypt the data to be sent.
  • the data to be sent by the camera device to the electronic lock can be various (such as signature data, system/software version number, cyclic redundancy check code, security information, etc.), which are not limited in this embodiment of the specification.
  • Step 1204 the camera device sends encrypted data to the electronic lock.
  • Step 1205 the electronic lock returns response data to the camera device.
  • after the electronic lock obtains the encrypted data, it can decrypt the encrypted data based on the new key and the challenge code.
  • the electronic lock can control the camera device to collect images of the security area after obtaining the security information indicating the occurrence of a security event, and add to the images the security identification that matches the security event indicated by the security information.
  • the electronic lock sends the security mark as an image label to the camera device, or the electronic lock obtains the image captured by the camera device and adds a security mark to it.
  • some embodiments of this specification further provide a security information processing device.
  • a security information processing device may include: a collection control unit and an identification adding unit. The collection control unit is configured to execute: after obtaining security information indicating the occurrence of a security event, controlling the collection device to collect information on the security area; the identification adding unit is configured to execute: adding a security identification to the information collected by the collection device, the security identification matching the security event indicated by the security information.
  • the security information processing device may further include: a duration setting unit configured to: set a duration for the collection device to collect information according to the security event indicated by the security information.
  • the duration setting unit may include: a level determination subunit and a duration setting subunit, the level determination subunit is configured to perform: determine the security level of the security event indicated by the security information; the duration setting subunit, Configured to execute: set the duration of information collection by the collection device to match the security level.
  • the duration setting unit may include: a first setting subunit and a second setting subunit.
  • the first setting subunit is configured to execute: after obtaining the security information indicating the occurrence of the first security event, according to the first security event, setting the duration of information collection by the collection device as the first duration; the second setting subunit The unit is configured to execute: after obtaining the security information indicating the occurrence of the second security event, according to the second security event, adjust the duration of information collection by the collection device from the first duration to the second duration.
  • the second duration is the sum of the first duration and the third duration, wherein the third duration matches the second security event, and the first duration matches the first security event. In some embodiments, the second duration is the sum of the first duration and the third duration, wherein the third duration matches the security level of the second security event, and the first duration matches the security level of the first security event.
  • the second duration is the sum of the third duration and the fourth duration, wherein the third duration matches the security level of the second security event, and the fourth duration is the duration between the first moment and the second moment , the first moment is the time at which the security information indicating the occurrence of the first security event is obtained, and the second moment is the time at which the security information indicating the occurrence of the second security event is obtained.
  • the second duration is the sum of the third duration and the fourth duration, wherein the third duration matches the second security event, the fourth duration is the duration between the first moment and the second moment, and the first The time is the acquisition time of the security information indicating the occurrence of the first security event, and the second time is the acquisition time of the security information indicating the occurrence of the second security event.
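  • The two ways of computing the second duration described above reduce to simple arithmetic; the numeric values below are illustrative assumptions:

```python
# Sketch of the two second-duration formulas described above.
# Times are in seconds and purely illustrative.

def extend_by_sum(first_duration, third_duration):
    # Embodiment 1: second duration = first duration + third duration
    return first_duration + third_duration

def extend_from_second_event(third_duration, t_first_info, t_second_info):
    # Embodiment 2: second duration = third duration + fourth duration,
    # where the fourth duration is the time between obtaining the first
    # and the second piece of security information.
    fourth_duration = t_second_info - t_first_info
    return third_duration + fourth_duration

assert extend_by_sum(30, 60) == 90
assert extend_from_second_event(60, 100, 120) == 80
```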
  • the identification adding unit adds a security identification to the information collected by the collection device, and is specifically configured to execute: after obtaining the security information indicating the occurrence of the first security event, adding a first security identification to the information continuously collected by the collection device from the third moment, the first security identification matching the first security event; after obtaining the security information indicating the occurrence of the second security event, adding a second security identification to the information continuously collected by the collection device from the third moment, the second security identification matching the second security event.
  • the identification adding unit adds a second security identification to the information continuously collected by the collection device from the third moment, and is specifically configured to perform: adding the information to the information continuously collected by the collection device from the third moment The first security mark is replaced by the second security mark.
  • the security information processing apparatus may further include: a notification unit configured to: perform notification processing of the event notification of the security event in a notification manner that matches the security event.
  • the security information processing device may further include: a first generation unit and a first decryption unit, the first generation unit is configured to: generate a first challenge code in response to a first challenge code request, and at least Send the first challenge code to the sender of the first challenge code request; the first decryption unit is configured to perform: obtain the first data sent by the sender, decrypt the first data based on the first challenge code and the target key, Wherein, the first data is data encrypted by the sender using the first challenge code and the target key.
  • the target key is a default key
  • the first data is an encrypted new key
  • the new key is generated by the sender.
  • the security information processing device may further include: a second generation unit and a second decryption unit, the second generation unit is configured to perform: in response to the second challenge code request sent by the sender, generate a second challenge code, and at least The second challenge code is sent to the sender; the second decryption unit is configured to execute: obtain the second data sent by the sender, and decrypt the second data based on the second challenge code and the new key, wherein the second data is sent by the sender Data encrypted by the party using the second challenge code and the new key.
  • the smart device may monitor a large number of objects (for example, people) during operation, and some of these objects need not be of concern or pose no security risk, such as a passerby who happens to walk past. If information on all monitored objects is pushed to the user, for example, video of passersby recorded and pushed to the user's mobile APP, the user will be disturbed by a large amount of useless information and may easily miss key information.
  • the method for controlling smart devices may include a method for distributing video information, so as to better push necessary video information to users.
  • identification information may be added to the collected video information, where the identification information may indicate a pushing level of the video information.
  • the video information may include a first video in the first preset area and/or a second video in the second preset area.
  • the security level of the first preset area is higher than the security level of the second preset area.
  • the corresponding push level can be set for the video information. In this way, the hierarchical push of the relevant information of the smart device is realized.
  • by dividing the security area into a first preset area and a second preset area with different security levels, and identifying the activity scene and identity of the person based on the first video and the second video corresponding to the first preset area and the second preset area, the identity of the person can be accurately identified, blind spots can be reduced or avoided, and relevant information can be pushed reasonably and effectively, so that users can take necessary countermeasures in a timely and effective manner.
  • control system may include a surveillance video distribution system.
  • monitoring video distribution system may include a video acquisition module, a track determination module, a scene determination module, an identity determination module, a push level determination module and a distribution module.
  • the video acquisition module may be used to acquire a first video in a first preset area and a second video in a second preset area.
  • the security level of the first preset area is higher than the security level of the second preset area.
  • the trajectory determination module may be used to determine the activity trajectory of the preset object based on the first video.
  • the scene determination module can be used to determine the activity scene of the preset object based on the activity track.
  • the scene determination module can also be used to determine at least one type of preset activity scene and the activity track corresponding to each preset activity scene, where the at least one type of preset activity scene includes at least a first type of activity scene and a second type of activity scene.
  • the first type of activity scene includes scenes passing through the sensitive monitoring area, and the second type of activity scene includes scenes not passing through the sensitive monitoring area; in the at least one type of preset activity scene, a preset activity scene matching the activity track of the preset object is determined, and the activity scene of the preset object is determined based on the matched preset activity scene.
  • the activity track corresponding to the first type of activity scene includes: at least one of a door entry track and a door exit track; the activity track corresponding to the second type of activity scene includes a normal track.
  • the scene determination module can also be used to determine the preset object movement state based on the first video, where the preset object movement state includes preset object movement or no preset object movement; determine the preset object movement state corresponding to each preset activity scene; determine, in the at least one type of preset activity scene, a preset activity scene matching the preset object's activity track and movement state, and determine the activity scene of the preset object based on the matched preset activity scene; wherein preset object movement corresponds to the first type of activity scene, and no preset object movement corresponds to the second type of activity scene.
  • the scene determination module can also be used to determine the state of the door lock, the state of the door lock including an open state or a closed state; determine the door lock state corresponding to each preset activity scene; determine, in at least one type of preset activity scene, a preset activity scene matching the activity track of the preset object and the state of the door lock; and determine the activity scene of the preset object based on the matched preset activity scene; wherein the open state corresponds to the first type of activity scene and the closed state corresponds to the second type of activity scene.
  • the identity determination module can be used to identify the identity of the preset object based on the second video.
  • the identity determination module can also be used to obtain, based on the second video, at least one preset object feature among face features, preset object gait features, and preset object accessory features, the preset object accessory features including at least one of clothing features, hairstyle features, and wearable features; and to determine the identity of the preset object based on the at least one preset object feature.
  • the identity determination module can also be used to determine at least one preset identity and at least one preset object feature corresponding to each preset identity; determine, among the at least one preset identity, a preset identity matching the features of the preset object; and determine the identity of the preset object based on the matched preset identity.
  • the identity determination module can also be used to identify the identity of the preset object through a machine learning model based on the second video.
  • the push level determination module can be used to determine the push level of the preset object based on the activity scene and identity.
  • different combinations of activity scene and identity may correspond to different push levels.
  • different push levels may correspond to different push methods.
  • the distribution module can be used to distribute the first video and/or the second video to one or more users based on the push level.
  • Fig. 15 is an exemplary flow chart of a monitoring video distribution method according to some embodiments of this specification. As shown in FIG. 15 , the process 1500 may include the following steps. In some embodiments, the process 1500 may be executed by a processor.
  • Step 1501: acquire a first video of a first preset area and a second video of a second preset area, where the security level of the first preset area is higher than that of the second preset area.
  • step 1501 may be performed by a video acquisition module.
  • the video acquisition module can acquire the first video of the first preset area and the second video of the second preset area, and the first preset area includes at least a sensitive monitoring area , the second preset area includes at least a non-sensitive monitoring area, and the security level of the first preset area is higher than that of the second preset area.
  • Preset objects are objects that may actively or passively enter the protected area.
  • the preset objects may include people, animals, movable objects and the like.
  • the protection area can include residential houses, factories, office areas, inside bank counters, etc.
  • the presence of preset objects in the security area refers to activity by preset objects within a distance from doors, windows, etc. that is less than a certain threshold. Examples include: cleaning the corridor or stairs in front of the door; cleaning windows; neighbors passing by the door; visitors (friends, neighborhood committee members, couriers) knocking on the door or ringing the doorbell; property personnel checking the facilities in front of the door or moving items; family members opening the door (with keys, fingerprints, face recognition, etc.); customers handling business at bank counters or ATM machines; unknown persons destroying the windows or glass of bank counters or damaging ATM machines; cats or birds on the balcony; and couriers waiting at the door.
  • the threshold can be determined according to requirements or experience, for example, 1 meter, 1.5 meters, 5 meters, 10 meters and so on.
  • the video acquisition module can determine whether there is a preset object in the security area through sensors, such as infrared sensors, vibration sensors, and sound sensors. If the sensor perceives people, animals, vibrations, and sounds (footsteps, breathing, clothing friction, cat meowing, wings flapping, etc.), the acquisition of the first video and/or the second video is triggered. In some embodiments, the video acquisition module may automatically trigger the acquisition of the first video and/or the second video, for example, once every 30 seconds. In some embodiments, when the security staff sees a preset object in the surveillance video, they can manually trigger the acquisition of the first video and/or the second video.
  • one or more cameras positioned to monitor a secured area may remain on.
  • some or all of the cameras can be turned on.
  • ordinary cameras (for example, cameras with lower resolution and lower power) may remain on, while other cameras such as high-definition cameras and optical flow cameras can be turned on when needed.
  • the infrared camera can be kept on at night.
  • the light source can be turned on and other cameras such as the high-definition camera and the optical flow camera can be turned on.
  • the video acquisition module can determine whether there is a preset object in the security area through various methods. For example, real-time analysis of video captured by an ordinary camera or an infrared camera, when it is judged that there is a human face, human figure, etc. in the video, it can be determined that there are preset objects around the door. For another example, when footsteps are detected by the sensor, it may be determined that a preset object exists in the security area. For another example, when a sensor detects that someone is moving, it can be determined that a preset object exists in the security area.
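As a minimal illustration of the trigger logic described above (sensor-based triggering plus a periodic automatic trigger), the following sketch is not part of the specification; the sensor types, sound categories, and the 30-second interval follow the examples given, while all function and variable names are assumptions:

```python
# Hypothetical sketch of the video-acquisition trigger logic; names are assumed.
TRIGGER_SOUNDS = {"footsteps", "breathing", "clothing friction",
                  "cat meowing", "wings flapping"}

def should_trigger(sensor_events, seconds_since_last_auto, auto_interval=30):
    """Decide whether to start acquiring the first and/or second video."""
    for kind, value in sensor_events:              # e.g. ("infrared", "person")
        if kind == "infrared" and value in ("person", "animal"):
            return True                            # presence sensed near door/window
        if kind == "vibration":
            return True                            # any vibration triggers acquisition
        if kind == "sound" and value in TRIGGER_SOUNDS:
            return True
    # automatic periodic trigger, e.g. once every 30 seconds
    return seconds_since_last_auto >= auto_interval
```

A manual trigger by security staff, as mentioned above, would simply bypass this check.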
  • the second preset area includes at least a non-sensitive monitoring area.
  • the second video refers to a video whose shooting area is the second preset area, for example, a video outside an entrance door, a video outside a factory building, a video outside a window, a video outside a bank counter, and the like.
  • the second preset area can be set according to actual needs.
  • the second video can be obtained by shooting with various cameras (for example, a common camera, a high-definition camera, an infrared camera, etc.).
  • the video acquisition module can acquire the second video from each camera through the network.
  • the video capture module can be integrated with individual cameras.
  • the video acquisition module can acquire the second video from each camera through the bus.
  • the second video can be acquired through an interface, and the interface includes but is not limited to a program interface, a data interface, a transmission interface, and the like. For example, when the monitoring video distribution system of the smart device is working, it can automatically extract the second video from the interface.
  • the first preset area may include but not limited to a sensitive monitoring area.
  • sensitive monitoring areas refer to areas that must be passed through to enter the protection area or to perform specific operations, for example, the area directly below a door frame, the area directly below a window frame, the glass partition of a bank counter and the area directly below it, the area connecting a balcony or roof to the interior, the operating area of an ATM machine, etc.
  • the first video refers to the video whose shooting area is the first preset area, for example, the video directly under the door frame of the entrance door, the video directly under the door frame of the office area, the glass partition of the bank counter and the video directly under it, etc.
  • the first preset area can be set according to actual needs, and can also include an area directly below the door frame (for example, an area within a preset distance around the door frame directly below, such as within 50 cm).
  • the first video can be obtained by shooting with various cameras.
  • the first video can be captured by a camera that can capture a motion track, such as an optical flow camera.
  • the optical flow camera refers to a camera that can reflect an optical flow field.
  • the video images captured by the optical flow camera can reflect the movement speed and direction of the pixels in the image.
  • the video acquisition module can acquire the first video from the foregoing camera through a network.
  • the video acquisition module can be integrated with the aforementioned camera.
  • the video acquisition module can acquire the first video from the aforementioned camera through the bus.
  • the first video can be obtained through an interface, and the interface includes but is not limited to a program interface, a data interface, a transmission interface, and the like. For example, when the surveillance video distribution system of the smart device is working, it can automatically extract the first video from the interface.
  • Step 1502: determine an activity track of a preset object based on the first video.
  • the activity track may refer to a route traveled by a preset object, for example, a person.
  • the activity track may include elevator entrance>>in front of the door>>doorway>>inside the door.
  • the activity trajectory determining module can determine the activity trajectory of the preset object based on the first video through various methods. For example, optical flow analysis is performed on the first video. For another example, analyze the sequence features of the image sequence composed of each frame of the first video, input the sequence features into the trained machine learning model, and determine the activity track of the preset object.
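As one hedged illustration of trajectory determination (a simplification, not the optical-flow or machine-learning analysis itself), per-frame positions of the preset object can be mapped to named zones and collapsed into a track like the "elevator entrance>>in front of the door" example above; the zone layout and coordinates are invented for illustration:

```python
# Hypothetical monitored zones (x0, y0, x1, y1) in image coordinates.
ZONES = {
    "elevator entrance": (0, 0, 100, 100),
    "in front of the door": (100, 0, 200, 100),
    "doorway": (200, 0, 250, 100),
    "inside the door": (250, 0, 350, 100),
}

def zone_of(point):
    """Return the zone name containing the point, if any."""
    x, y = point
    for name, (x0, y0, x1, y1) in ZONES.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None

def activity_track(positions):
    """Collapse per-frame positions into an ordered, de-duplicated zone path."""
    track = []
    for p in positions:
        z = zone_of(p)
        if z and (not track or track[-1] != z):
            track.append(z)
    return ">>".join(track)
```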
  • Step 1503: determine the activity scene of the preset object based on the activity track.
  • the activity scene refers to the scene of the preset object's activities, such as the scene of people going out, entering the door, passing by the door, staying in front of the door, cats entering the balcony, birds eating on the roof, etc.
  • the scene determination module can determine the activity scene of the preset object based on the activity track through various methods. For example, by training a machine learning model, the input of the machine learning model may be a curve formed by connecting positions of preset objects at various time points, and the output may be a corresponding activity scene type.
  • the scene determination module can determine one or more types of preset activity scenes and the activity tracks corresponding to each preset activity scene; determine, in the one or more types of preset activity scenes, a preset activity scene whose activity track matches that of the preset object; and determine the activity scene of the preset object based on the matched preset activity scene.
  • Step 1504: identify the identity of the preset object based on the second video.
  • the identity of the preset object refers to the relationship or role between the preset object and the user.
  • the identities of the preset objects may include family members of the owner, strangers, neighbors and friends of the owner, express delivery, etc.
  • the preset object identities may include employees of the enterprise, visitors of the enterprise, support staff of the enterprise, and the like.
  • the identity determination module can identify the identity of the preset object through various methods.
  • the identity determination module can acquire one or more preset object features based on the second video; the preset object features can include face features, preset object gait features, preset object accessory features, fingerprints, voiceprints, veins, irises, etc., and the module can determine the identity of the preset object based on the one or more preset object features.
  • the preset object accessory features may include at least one of features of accessories on a person such as clothing features, hairstyle features, and wearable features. For more information about identifying the identity of the preset object, please refer to FIG. 17 and its description in this specification.
  • the identity determination module may identify the identity of the preset object through a machine learning model based on the second video. For identifying the identity of the preset object through the machine learning model, refer to FIG. 18 and its description in this specification.
  • Step 1505: determine the pushing level of the first video and/or the second video based on the activity scene and the identity.
  • the pushing level may reflect the level of importance and/or urgency of the first video and/or the second video.
  • the push level may include level 1, level 2, level 3, etc., wherein a smaller level number corresponds to higher importance and/or urgency of the security area information.
  • level 1 is the highest level, corresponding to the highest degree of importance and/or urgency
  • level 2 is next
  • level 3 is the lowest level, corresponding to the lowest degree of importance and/or urgency.
  • the push level determination module may determine the push level of the first video and/or the second video related to the preset object based on the activity scene and identity of the preset object. In some embodiments, the push level determination module can further determine the first video related to the preset object based on duration, specific sound, specific action (for example, picking a lock, unlocking a lock, knocking on a door, ringing a doorbell, smashing a window), etc. and/or the feed rating of the second video.
  • FIG. 19 of this specification and its description provide an example of determining the pushing level of the first video and/or the second video.
  • Step 1506: distribute the first video and/or the second video to one or more users based on the push level.
  • different push levels correspond to different video (first video and/or second video) distribution methods.
  • a level 1 push level may correspond to distribution methods such as sending the video to the user and opening the connection between the terminal device and each camera, so that the user can choose to view the video or directly view the real-time images.
  • the level 2 push level may correspond to a distribution method such as sending a video to a user (but not opening a connection with each camera).
  • a level 3 push level may correspond to distribution methods such as periodically sending videos to terminal devices, user accounts, or user mailboxes.
  • the distribution module can distribute videos of different push levels to users with different permissions.
  • videos with push level 1 can be sent to property owners and family members, security personnel, surveillance personnel, etc.
  • videos with a level 2 or level 3 push level may only be sent to owners.
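The level-to-distribution mapping described above might be sketched as follows; this is an assumption-laden illustration in which the recipient lists and method strings merely restate the examples, and the data layout is invented:

```python
# Hypothetical mapping from push level to distribution method and recipients.
LEVEL_RULES = {
    1: {"method": "send video and open live connection to cameras",
        "recipients": ["owner", "family", "security", "surveillance"]},
    2: {"method": "send video",
        "recipients": ["owner"]},
    3: {"method": "periodically send video to device, account, or mailbox",
        "recipients": ["owner"]},
}

def distribute(push_level):
    """Return (user, method) pairs for a given push level."""
    rule = LEVEL_RULES[push_level]
    return [(user, rule["method"]) for user in rule["recipients"]]
```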
  • Fig. 16 is an exemplary flowchart of a method for determining an active scene according to some embodiments of the present specification.
  • Step 1601: determine at least one type of preset activity scene and the activity track corresponding to each preset activity scene.
  • At least one type of preset activity scene may include a first type of activity scene and a second type of activity scene; wherein, the first type of activity scene may include a scene passing through a sensitive monitoring area, for example, a visit scene, a door entry scene, an exit scene, etc.;
  • the second type of activity scenarios may include scenarios that do not pass through sensitive monitoring areas, for example, scenarios of passing by, staying in front of a door, moving items in front of a door, and the like.
  • the scene determination module can determine the activity track corresponding to each preset activity scene.
  • the activity track corresponding to the first type of activity scene may include a track of entering a sensitive monitoring area and leaving a sensitive monitoring area
  • the activity track corresponding to the second type of activity scene may include a regular track.
  • the regular track refers to an activity track of the preset object that does not require the user's special attention, such as passing by the door (for example, downstairs >> stairs >> in front of the door >> stairs >> upstairs).
  • the track of entering the sensitive monitoring area refers to the activity track corresponding to the preset object entering the sensitive monitoring area, for example, the track of entering the door.
  • the track of leaving the sensitive monitoring area refers to the activity track corresponding to the preset object leaving the sensitive monitoring area, for example, the track of going out.
  • Step 1602: determine, in the at least one type of preset activity scene, the preset activity scene that matches the activity track of the preset object, and determine the activity scene of the preset object based on the matched preset activity scene.
  • the activity scene of the preset object can be determined according to whether the activity track of the preset object matches one or several activity tracks included in one or more types of preset activity scenes, i.e., the preset activity scene corresponding to the matched track. As an example, if it is determined that the activity track of the preset object is downstairs >> stairs >> in front of the door >> stairs >> upstairs, then the activity scene of the preset object may be one or a combination of the passing scene, the stay scene, and the scene of moving items in front of the door. For another example, if it is determined that the activity track of the preset object is the door entry track, then the activity scene of the preset object may be the door entry scene. For another example, if the activity track of the preset object around the door is determined to be the exit track, then the activity scene of the preset object may be the exit scene.
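The matching in step 1602 can be pictured as a lookup from track labels to registered scenes; this is a minimal sketch under assumed labels, where only the entry/exit/regular track examples come from the text:

```python
# Hypothetical registry of preset activity scenes and their activity tracks.
PRESET_SCENES = {
    # first type: tracks that pass through the sensitive monitoring area
    "door entry scene": {"door entry track"},
    "exit scene": {"door exit track"},
    # second type: tracks that do not pass through the sensitive monitoring area
    "passing scene": {"regular track"},
}

def match_scene(track_label):
    """Return the preset activity scene whose tracks include track_label."""
    for scene, tracks in PRESET_SCENES.items():
        if track_label in tracks:
            return scene
    return None
```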
  • the scene determination module may also determine a preset object movement state based on the first video, and the preset object movement state may include preset object movement or no preset object movement.
  • the scene determination module can determine the preset object movement state corresponding to each preset activity scene. For example, the movement of preset objects corresponds to the first type of activity scene, and the movement of no preset object corresponds to the second type of activity scene.
  • the scene determination module can determine the preset object activity track and preset object movement state corresponding to each preset activity scene. For example:
  • the preset object activity track includes the door entry track and the preset object movement state is preset object movement, corresponding to the door entry scene; the preset object activity track includes the exit track and the preset object movement state is preset object movement, corresponding to the exit scene; the preset object activity track includes neither the door entry track nor the exit track and the preset object movement state is preset object movement, corresponding to the passing scene; the preset object activity track includes neither the door entry track nor the exit track and the preset object movement state is no preset object movement, corresponding to the stay scene; the preset object activity track includes neither the door entry track nor the exit track, but the preset object activity track appears in a specific area behind the door (for example, a courier delivery area set by the user or the system) and the preset object movement state is preset object movement, corresponding to the scene of moving items in front of the door.
  • the scene determination module can determine, in at least one type of preset activity scene, a preset activity scene that matches the activity track of the preset object and the movement state of the preset object, and determine the activity scene of the preset object based on the matched preset activity scene.
  • the scene determination module can also determine the door lock state, which can include an open state or a closed state.
  • the scene determination module can determine the state of the door lock through various methods such as image recognition and door lock detection.
  • the scene determination module can determine the door lock state corresponding to each preset activity scene. For example, the on state corresponds to the first type of activity scene, and the off state corresponds to the second type of activity scene.
  • the scene determination module can determine the preset object activity track and the door lock state corresponding to each preset activity scene. For example:
  • the preset object activity track includes the entry track and the door lock is in the open state during the process, corresponding to the entry scene; the preset object activity track includes the exit track and the door lock is in the open state during the process, corresponding to the exit scene; the preset object activity track includes neither the exit track nor the entry track, the time during which the activity track appears is below a threshold, and the door lock is closed during the process, corresponding to the passing scene; the preset object activity track includes neither the exit track nor the entry track, the time during which the activity track appears is above the threshold, and the door lock is closed during the process, corresponding to the scene of staying in front of the door; the preset object activity track includes neither the exit track nor the entry track, but the preset object activity track appears in a specific area behind the door (for example, the courier storage area set by the user or the system), and the door lock is closed during the process, corresponding to the scene of moving items in front of the door.
  • the scene determination module can determine, in at least one type of preset activity scene, a preset activity scene matching the activity track and the door lock state of a preset object, and determine the activity scene of the preset object based on the matched preset activity scene.
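The combined rules (activity track, door lock state, dwell time, and the special storage area) might be sketched as one decision function; this is an illustrative sketch, and the 5-minute threshold default, parameter names, and rule ordering are assumptions:

```python
# Hypothetical decision function combining the rules above; names are assumed.
def classify_scene(track, door_open, dwell_seconds,
                   in_storage_area=False, stay_threshold=300):
    """Map observations to a preset activity scene."""
    if "entry" in track and door_open:
        return "door entry scene"
    if "exit" in track and door_open:
        return "exit scene"
    # no entry/exit through the sensitive area while the door stays closed:
    if in_storage_area:
        return "scene of moving items in front of the door"
    if dwell_seconds < stay_threshold:
        return "passing scene"
    return "scene of staying in front of the door"
```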
  • the scene determination module can determine, in at least one type of preset activity scene, a preset activity scene that matches the activity track of the preset object, the image behind the door, and the state of the door lock, and determine the activity scene of the preset object based on the matched preset activity scene. For example: if the preset object activity track includes the door entry track, the image behind the door indicates that the preset object moves, and the door lock is open during the process, it is judged as the door entry scene; if the preset object activity track includes the exit track, the image behind the door indicates that the preset object moves, and the door lock is open during the process, it is judged as the exit scene; if the preset object activity track includes neither the exit track nor the entry track, the time during which the activity track appears is below the threshold, and the door lock is closed during the process, it is judged as the passing scene; if the preset object activity track includes neither the exit track nor the entry track, the time during which the activity track appears is above the threshold, and the door lock is closed during the process, it is judged as the staying scene; if the preset object activity track includes neither the exit track nor the entry track, but the preset object activity track appears in a specific area behind the door (for example, the courier delivery area set by the user or the system), and the door lock is closed during the process, it is judged as the scene of moving items in front of the door.
  • Fig. 17 is an exemplary flowchart of a method for identifying an identity according to some embodiments of this specification.
  • Step 1701: based on the second video, obtain at least one preset object feature among face features, preset object gait features, and preset object accessory features, the preset object accessory features including at least one of clothing features, hairstyle features, and wearable features.
  • step 1701 may be performed by an identity recognition module.
  • face features refer to the features of a person's face, and may include skin color, skin texture, the features of the facial organs, makeup features, etc.
  • gait features refer to features that reflect the magnitude, direction, and point of application of force when a person walks, and are a reflection of the person's walking habits in the stages of foot landing, foot lifting, and support swing. Gait features may include a person's step length, stride length, stride frequency, pace velocity, gait cycle, and the like.
  • accessory features refer to features of clothing or carried items, for example, badges, helmets, food delivery boxes, water buckets, or carts carried by a person, and the clothes, headgear, hats, etc. worn on a person.
  • the accessory features include clothing features, hairstyle features, wear features, carry features, and the like.
  • the identity recognition module can acquire at least one preset object characteristic mentioned above through various methods. For example, facial features and appendage features are acquired through image recognition, and gait features are acquired through gait analysis of video images, etc.
  • Step 1702: determine the identity of the preset object based on the at least one preset object feature.
  • the identity recognition module may determine the identity of the preset object based on at least one characteristic of the preset object.
  • the identity recognition module may determine at least one preset identity and at least one preset object characteristic corresponding to each preset identity.
  • At least one preset identity may include the owner's family members, strangers, neighbors and friends of the owner, express takeaways, employees of the enterprise, preset visitors of the enterprise, preset logistics objects of the enterprise, and the like.
  • At least one preset object feature corresponding to each preset identity can be customized by the user, or obtained by means of feature extraction from historical data.
  • the identity recognition module may determine, among the at least one preset identity, a preset identity matching the at least one preset object feature of the preset object, and determine the identity of the preset object based on the matched preset identity. For example, the express takeaway identity matches the features of wearing a badge, wearing a uniform, and carrying a delivery box; when the preset object has these features, the identity recognition module can set the identity of the preset object to express takeaway.
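The feature-to-identity matching in the courier example can be sketched as a subset test; identity names and feature labels other than the badge/uniform/delivery-box example are assumptions for illustration:

```python
# Hypothetical registry of preset identities and their required features.
PRESET_IDENTITIES = {
    "express takeaway": {"badge", "uniform", "delivery box"},
    "owner family member": {"registered face"},
}

def identify(observed_features):
    """Return the preset identity whose features are all observed, else 'stranger'."""
    for identity, required in PRESET_IDENTITIES.items():
        if required <= set(observed_features):
            return identity
    return "stranger"
```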
  • the identity recognition module may use a sensor capable of detecting a preset object, such as an infrared sensor (PIR) or a laser ranging sensor, to detect the movement of a preset object around the door.
  • the identity recognition module can perform human figure detection based on the image, estimate the height of the preset object based on the human figure detection, and then calculate the best face recognition position (for example, 30 cm away from the door) based on the height of the preset object; the identity recognition module can then perform more accurate face recognition at the best face recognition position.
  • Human figure detection refers to the detection of the shape, outline, etc. of a preset object.
  • the identity recognition module can also prompt the best face recognition position through the display screen interface to guide the preset object in front of the door to move to the best face recognition position.
  • the identity recognition module can also prompt the best standing position on the ground through the laser light, and the best standing position can be the position corresponding to the best face recognition position.
  • the accuracy of identity recognition can be improved by prompting the best face recognition position and the best standing position.
  • Fig. 18 is another exemplary flowchart of a method for identifying an identity according to some embodiments of this specification.
  • the identity recognition module may recognize the identity of the preset object through a machine learning model based on the second video.
  • the machine learning model may include, but not limited to, a neural network model, a support vector machine model, a k-nearest neighbor model, a decision tree model, and a combination of one or more of them.
  • the input of the machine learning model may include images related to the preset object, and the output of the machine learning model may include the identity of the preset object, for example, family members, express delivery, etc.
  • the machine learning model can initially mark the identity and characteristics of the preset object for the recognized preset object.
  • the push level determination module may determine the push level of the preset object based on the initially marked identity and the activity scene identified in FIG. 3.
  • the identity recognition module can remind the user to verify whether the initial mark is correct, for example, remind the user to verify in the information pushed to the user, or remind the user to verify regularly (for example, at 8:00 every night). If the user considers the initial mark incorrect, the user can re-mark it.
  • the face, gait, accessory features, etc. of the relabeled preset object and the corresponding identity category can be stored in the sample library for training the machine learning model.
  • Fig. 19 is a schematic diagram of a surveillance video distribution method according to some embodiments of this specification.
  • different activity scenarios and different identities correspond to different push levels.
  • when the preset object identity is a family member, a neighbor, or an express takeaway, it corresponds to a lower push level (for example, level 2 or level 3); when the preset object identity is a stranger, it corresponds to a higher push level (for example, level 1).
  • the track determining module can record the duration of the action track belonging to the same identity. For example, if the duration in the second type of activity scenario (for example, staying, etc.) is greater than or equal to the threshold (for example, 5 minutes), it corresponds to a higher push level (for example, level 1); if the duration in the second type of scenario If it is smaller than the threshold, it corresponds to a lower push level (for example, level 3).
  • the push level determination module may determine the push level based on scene type, duration, and preset object identity. For example, when the duration of the second type of activity scene is greater than the threshold, if the preset object identity is property personnel or cleaning staff, it corresponds to a lower push level (for example, level 3); if the preset object identity is a stranger, it corresponds to a higher push level (for example, level 1). As shown in FIG. 6, if the duration in the first type of scenario is greater than or equal to the threshold (for example, 5 minutes) and the preset object identity is a courier, it corresponds to a higher push level (for example, level 1).
  • the push level determination module may further determine the push level through the sound and/or motion detected by each camera. For example, in the second type of activity scenario, if an action or sound such as pressing the doorbell or knocking on the door is detected, the push level is set to level 2; if an action or sound such as picking a lock is detected, the push level is set to level 1.
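  • the push-level rules above (identity, scene type, dwell time, and detected sounds or actions) can be sketched as follows. This is an illustrative sketch only: the identity names, scene labels, 5-minute threshold, and level numbers are assumptions drawn from the examples in the text, not the patented implementation.

```python
# Hypothetical push-level determination combining the rules described
# above; all names and thresholds are illustrative assumptions.

STAY_THRESHOLD_S = 5 * 60  # example threshold: 5 minutes

def push_level(identity, scene_type, duration_s, detected_event=None):
    """Return 1 (most urgent) .. 3 (least urgent) for a recognized person."""
    # Detected sounds/actions override the other rules.
    if detected_event == "lock_picking":
        return 1
    if detected_event in ("doorbell", "knocking"):
        return 2
    # Long stays: known service roles stay low, anyone else becomes urgent.
    if scene_type == "staying" and duration_s >= STAY_THRESHOLD_S:
        if identity in ("property_staff", "cleaning"):
            return 3
        return 1
    # Short events: strangers rank above known identities.
    if identity == "stranger":
        return 1
    if identity in ("family", "neighbor", "courier"):
        return 3
    return 2  # default for unlisted combinations
```

A real system would derive `identity` and `scene_type` from the recognition modules described earlier; here they are plain strings for clarity.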
  • the user can set the advance event information through the terminal device 160 .
  • the advance event information may include a preset identity, a preset time of occurrence, and the like. For example, friends will visit at 10:00, takeaway will be delivered at 11:50, etc.
  • the push level determination module may receive the advance notice event information set by the user, extract the preset identity, preset occurrence time, etc., and set the preset push level of the advance notice event information.
  • when the identity determination module identifies a person corresponding to the preset identity at the preset occurrence time, push is performed according to the preset push level. For example, the push level determination module sets the push level of the event "takeaway will be delivered at 11:50" to level 1; at 11:50, the identity determination module recognizes that there is a takeaway deliverer in front of the door, and the information is then pushed to the user in the method corresponding to push level 1 (for example, dialing the user's phone).
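  • the matching of a pre-announced event against the identity recognized at the door can be sketched as below; the class shape, the 15-minute time tolerance, and the default level are assumptions for the example, not part of the specification.

```python
# Illustrative sketch: match a user-announced event ("takeaway at 11:50")
# against the identity recognized at the preset occurrence time.
from datetime import datetime, timedelta

class AdvanceEvent:
    def __init__(self, identity, when, push_level, tolerance_min=15):
        self.identity = identity        # preset identity, e.g. "courier"
        self.when = when                # preset occurrence time
        self.push_level = push_level    # preset push level for this event
        self.tolerance = timedelta(minutes=tolerance_min)

def match_push_level(events, seen_identity, seen_at, default_level=2):
    """Use the event's preset push level if a recognized identity matches
    a pre-announced event near its preset time; otherwise fall back."""
    for ev in events:
        if ev.identity == seen_identity and abs(seen_at - ev.when) <= ev.tolerance:
            return ev.push_level
    return default_level  # normal push-level determination applies
```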
  • one or more collection devices of the smart device can collect various related information of different security areas.
  • the smart device control method provided in some embodiments of this specification includes a comprehensive monitoring information management method, which can increase the type and scope of the monitoring information of smart devices to improve the user experience.
  • the relevant information may include a first video of a first preset area, and the corresponding operation includes generating comprehensive monitoring information;
  • the control method may include: judging the probability of a preset object in the first preset area based on the first video; when the probability of a preset object in the first preset area meets the preset condition, acquiring a second video of the second preset area; and generating comprehensive monitoring information based on the first video and/or the second video.
  • the first preset area may not be exactly the same as the second preset area. In some embodiments, the first preset area and the second preset area may be different areas. In some embodiments, the control system may separately process the first video and the second video to obtain different information of preset objects. In some embodiments, the control system may process the first video to obtain probability information of a preset object in the first preset area, an activity track of the preset object, and the like. In some embodiments, the processing method of the control system on the first video may include but not limited to feature extraction algorithm analysis (for example, human figure feature extraction algorithm, face feature extraction algorithm, etc.), optical flow algorithm analysis, and the like.
  • the control system can process the second video (for example, with a feature extraction algorithm, a face recognition algorithm, etc.) to confirm whether there is a preset object in the second area and, when there is a preset object, extract the feature information of the preset object so as to identify its identity.
  • the control system can also generate comprehensive monitoring information based on the first video and the second video.
  • the control system may generate comprehensive monitoring information based on the first video and the second video when it is confirmed that there is a preset object in the first area or the second area.
  • control system may include a comprehensive monitoring information management system, and the comprehensive monitoring information management system may be used to monitor the smart device itself and its management area, and manage related monitoring information.
  • the processor of the comprehensive monitoring information management system may include a first acquisition module, a second acquisition module, and a generation module.
  • the first obtaining module may be used to obtain a first video of a first preset area, and judge the probability of a preset preset object in the first preset area based on the first video.
  • the first obtaining module may use an optical flow algorithm to analyze the first video to obtain optical flow information in the first video, and judge the probability that a preset object exists in the first preset area based on the optical flow information. For more details about the first preset area, the first video, and the preset object, refer to FIG. 20 and its related descriptions, and details will not be repeated here.
  • the second acquiring module may be configured to acquire the second video in the second preset area when the probability of the preset object in the first preset area satisfies a preset condition.
  • For more details about the preset conditions, the second preset area, and the second video, refer to FIG. 20 and its related descriptions, which will not be repeated here.
  • the generating module may be used to generate comprehensive monitoring information based on the first video and the second video and send it to the terminal device.
  • For more details about the comprehensive monitoring information and the terminal device, refer to FIG. 20 and its related descriptions, which will not be repeated here.
  • the first acquisition module, the second acquisition module, and the generation module may be different modules in one system, or one module may realize the functions of two or more of the above modules.
  • each module may share one storage module, or each module may have its own storage module.
  • Fig. 20 is an exemplary flowchart of a comprehensive monitoring information management method according to some embodiments of the present specification. As shown in FIG. 20, the process 1900 may include one or more of the following steps. In some embodiments, the process 1900 may be executed by a processor.
  • Step 1901: acquire a first video of a first preset area, and judge the probability that a preset target object exists in the first preset area based on the first video.
  • step 1901 may be performed by the first obtaining module.
  • the first preset area may be an area within a preset range around the smart device, and the location of the first preset area is related to the location of the smart device.
  • the smart device can be set on the periphery of the space that needs to be managed (for example, the management area), for example, set on the door, lock, door frame, window or wall of the space, etc.
  • the first preset area may be a 60cm*60cm area around the door, door frame, or window of the space.
  • the collection range of the first preset area may be changed due to being blocked by structures around the smart device.
  • for example, when the first preset area is the area around the door of the management area: when the door is opened, the collection range of the first preset area may include a part of the area outside the door (such as the area in front of the door) and a part of the area inside the door (such as the entrance area in the room); when the door is closed, the collection range of the first preset area may only include a part of the area outside the door.
  • the processor may acquire optical flow information based on the first video, and the optical flow information may be object motion information included in the video information.
  • the moving route information of the object may be included in the optical flow information.
  • the optical flow information may include movement track information of a hand of a human body.
  • the optical flow information can be obtained by processing the first video with an optical flow algorithm.
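  • as a simplified stand-in for the optical-flow step, the sketch below uses plain frame differencing to estimate how much of the first preset area is moving. A real implementation would use a dense optical flow algorithm (which yields per-pixel motion vectors); the pixel threshold, the 80% probability condition, and the interpretation of the score as a "probability" are illustrative assumptions.

```python
# Simplified motion-probability sketch: frame differencing between two
# consecutive grayscale frames approximates the optical-flow judgment.
import numpy as np

def motion_probability(prev_frame, next_frame, pixel_thresh=25):
    """Fraction of pixels whose brightness changed noticeably,
    used here as a crude motion score in [0, 1]."""
    diff = np.abs(next_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float((diff > pixel_thresh).mean())

def preset_object_likely(prev_frame, next_frame, prob_thresh=0.8):
    """Apply the example preset condition (probability greater than 80%)."""
    return motion_probability(prev_frame, next_frame) >= prob_thresh
```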
  • the first acquisition device may include a first camera device.
  • the first video can be captured by a first camera.
  • the first camera device may be set on the top of the door frame of the management area to acquire the first video of the first preset area.
  • the first camera device can be various types of image acquisition devices, for example, including but not limited to ordinary cameras, high-definition cameras, visible light cameras, infrared cameras, night vision cameras, monocular cameras, binocular cameras, etc.
  • the first camera device can process the captured video through a built-in processor to obtain optical flow information.
  • the first camera device can be set on the wall. In some embodiments, the first camera device can also be set at other positions where the first preset area can be photographed.
  • the first camera device may remain turned on to continuously capture the first video.
  • the first camera device may be turned on to acquire the first video.
  • Other monitoring information may be security-related information monitored by other devices.
  • the other device may be a motion detection device.
  • the motion detection device can detect the motion information of objects in the surrounding area of the smart device. Motion detection devices may include, but are not limited to, infrared sensors, cameras, ultrasonic detection devices, and the like. When the motion detection device detects that there is a moving object in the area around the smart device, the first camera device can be turned on to acquire the first video.
  • a preset object may refer to an object that is set to be monitored.
  • the preset target object may be a human body.
  • the preset target object may also be other objects, for example, other animals such as cats and dogs.
  • the first video may be analyzed in various feasible ways, and based on the analysis result, the probability of the preset target object existing in the first preset area is judged.
  • the preset target object may be a human body
  • the first video may be analyzed through image recognition technology to determine the probability of a human body in the first video.
  • the probability that human body features, such as limbs and facial features, appear in the first video may be determined through image recognition technology, and this probability may be taken as the probability that the preset target object exists in the first preset area.
  • the probability that a preset target object exists in the first preset area may also be determined in other ways. For example, the probability that the motion change in the first video is a specific motion change of the preset target object is identified by the motion recognition technology, and the probability is determined as the probability that the preset target object exists in the first preset area.
  • Step 1902: when the probability that the preset target object exists in the first preset area satisfies the preset condition, acquire the second video of the second preset area.
  • step 1902 may be performed by the second acquiring module.
  • the preset condition may refer to a preset condition that should be satisfied when it is judged that there is a preset target object in the first preset area.
  • the preset condition can be "greater than 80%", that is, when the probability of the preset target object in the first preset area is greater than 80%, it is judged that a preset target object exists in the first preset area; when the probability is less than or equal to 80%, it is judged that no preset target object exists in the first preset area.
  • the second preset area may be an area within another preset range near the smart device.
  • the second preset area may be an area of 100cm*100cm at the corridor outside the door frame of the management area.
  • the second preset area may overlap with the first preset area.
  • the second collection device may include a second camera device.
  • the second video of the second preset area may be acquired by a second camera.
  • the second camera device may include but not limited to an optical camera, an infrared camera, and the like.
  • the second camera device can also be used to obtain other monitoring information. For example, it can also be used to capture motion information of objects in the second preset area.
  • the second camera device can be installed at any position capable of taking pictures of the second preset area.
  • the second camera device can be installed on the lock of the door of the management area, and can also be installed on the doorbell.
  • the second preset area and the first preset area may include monitoring blind areas of each other.
  • the second preset area may include a monitoring blind area of the first preset area, or the first preset area may include a monitoring blind area of the second preset area.
  • the first camera device 420 may be set on the top of the door frame to acquire a first video of a first preset area 440 around the door frame.
  • the second camera device 430 may be set on the doorbell to acquire a second video of a second preset area 450 in the corridor area.
  • the processor may also control the switch device of the smart device, so as to prevent the switch device or the smart device from colliding with or pinching the target object while operating, which would cause damage to the target object.
  • for example, when the preset target object is a human body, the processor can control the opening and closing of the door of the management area to prevent the human body from being injured during operation.
  • the first camera device may continuously acquire the first video in the first preset area.
  • the security information of the smart device may also be acquired. For more information on obtaining security information of smart devices, refer to FIG. 22 and its related descriptions, which will not be repeated here.
  • the probability of the preset target object existing in the first preset area may be continuously judged. Specifically, real-time processing may be performed on each frame of the first video to determine the probability that a preset target object exists in the first preset area.
  • the first video can be divided into multiple sub-time periods according to the time of acquisition, and the first video acquired in each sub-time period can be judged continuously. Specifically, when the first camera device captures 5 seconds of the first video, the processor can judge the probability of the preset target object in the first preset area based on that portion of the first video.
  • a duration for continuously acquiring the second video can be set, and the second camera device is turned off after this duration elapses.
  • Step 1903: generate comprehensive monitoring information based on the first video and/or the second video, and send it to the terminal device.
  • step 1903 may be performed by a generating module.
  • a terminal device may be a terminal associated with a smart device.
  • the terminal device may be a mobile terminal such as a mobile phone or a tablet computer, and the application program related to the smart device is installed in the mobile terminal.
  • the terminal device may also be other devices.
  • the terminal device may also be an indoor unit associated with the smart lock.
  • the terminal device can receive the comprehensive monitoring information sent by the processor and display it to the user.
  • the user can issue instructions through the terminal device to control the smart device or other related devices. For example, when the smart device is a smart lock, the user can issue an anti-lock command through the terminal device to control the door lock to perform anti-lock.
  • the comprehensive monitoring information is information related to security.
  • the comprehensive surveillance information may include one or more surveillance videos, for example, the first video, or the first video together with the second video.
  • the surveillance video can be directly sent to the terminal device as comprehensive surveillance information.
  • the surveillance video may be processed and sent to the terminal device as comprehensive surveillance information. For more details on processing surveillance video, refer to FIG. 23 and its related descriptions, which will not be repeated here.
  • the comprehensive monitoring information may also include reminder information.
  • the comprehensive monitoring information may include reminder information that the door lock is successfully unlocked.
  • the control system can also add identification information to the comprehensive monitoring information.
  • the identification information of the comprehensive monitoring information may include an event tag.
  • the event tag may refer to a tag corresponding to a security information event included in the comprehensive monitoring information.
  • Security information events may refer to security-related events that occur on smart devices and their surrounding areas. For example, when the smart device is set around the door of the management area, security information events may include events such as entering the door, exiting the door, unlocking the door lock, picking the lock, opening the door under duress, and multiple trial and error unlocking events.
  • the event tags in the comprehensive monitoring information may also include door entry, exit, door lock unlocked, lock picking, coercion to open the door, multiple trial and error unlocking, etc.
  • the security information event may include events such as opening the window, closing the window, unlocking the window, and breaking the window.
  • the event tags in the comprehensive monitoring information may also include window opening, window closing, window unlocking, window breaking, etc.
  • the processor may determine the type of event tag based on the security information of the smart device. For example, the type of security information is determined as the type of event tag. For more details about security information, refer to FIG. 22 and its related descriptions, which will not be repeated here.
  • the event tag may also include the time when the security information event occurred. Based on the occurrence time of the security information event and the time information of the comprehensive monitoring information, the event tag corresponding to the comprehensive monitoring information is determined. For example, if the smart device is a smart lock, the entry time reported by the built-in program of the smart lock is 08:01:00, October 12, 2033, the start time of the surveillance video in the comprehensive monitoring information is 08:00:00, October 12, 2033, and the end time is 08:03:10, October 12, 2033, then an event tag of "entry" can be added to the comprehensive monitoring information. For more details about the comprehensive monitoring information, refer to FIG. 23 and its related descriptions, which will not be repeated here.
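  • the time-based tagging just described can be sketched as a simple interval check: an event reported by the device is attached to a clip when its reported time falls inside the clip's time span. The function shape and event names are illustrative assumptions; the datetimes reuse the entry example in the text.

```python
# Illustrative sketch: attach event tags to a surveillance clip by
# checking whether each reported event time falls inside the clip span.
from datetime import datetime

def tags_for_clip(clip_start, clip_end, events):
    """events: list of (event_name, event_time) reported by the device.
    Returns the names of events that occurred during the clip."""
    return [name for name, t in events if clip_start <= t <= clip_end]
```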
  • the event label corresponding to the comprehensive monitoring information may also be determined based on other methods. For example, the content of the surveillance video in the comprehensive surveillance information may be analyzed, and an event label corresponding to the comprehensive surveillance information may be determined based on the analysis result.
  • the identification information of the comprehensive monitoring information may include security level information, so as to classify the comprehensive monitoring information into multiple security levels.
  • the security level may be used to reflect the security status of the smart device and the surrounding area corresponding to the comprehensive monitoring information. The higher the security level, the safer the smart device and the surrounding area corresponding to the comprehensive monitoring information.
  • the security level of the first preset area may be higher than that of the second preset area.
  • security levels may include Level 1, Level 2, and Level 3.
  • level 1 can indicate that the smart device and the surrounding area corresponding to the comprehensive monitoring information are in a high-risk state
  • level 2 can indicate that the smart device and the surrounding area corresponding to the comprehensive monitoring information are in a medium-risk state
  • level 3 can indicate that the smart device and the surrounding area corresponding to the comprehensive monitoring information are in a low-risk state.
  • the security level may also include other levels, for example, may also include level 4, and level 4 may indicate that the smart device and the surrounding area corresponding to the comprehensive monitoring information have no security risks.
  • the security level corresponding to the comprehensive monitoring information may be determined based on event tags.
  • for example, when the smart device is a smart lock, the security level corresponding to the comprehensive monitoring information may be determined as level 2 for some event tags and as level 3 for others.
  • the security level corresponding to the comprehensive monitoring information may be determined based on the content of the monitoring video in the comprehensive monitoring information. For example, portrait recognition can be performed on the surveillance video in the comprehensive surveillance information to determine whether the people in the surveillance video are family members, wherein the faces of the family members can be pre-recorded. When the people in the surveillance video do not include family members, the security level of the comprehensive surveillance information is determined to be level 1.
  • the content of the monitoring information to be sent to the terminal device may be determined based on the security level of the comprehensive monitoring information. For example, when the security level of the comprehensive monitoring information is level 1, the monitoring information may include the monitoring video and reminder information; when the security level is level 2, the monitoring information may include the monitoring video; when the security level is level 3, the monitoring information may include only reminder information, or no information may be sent.
  • the reminder method may refer to a method for reminding the user when the terminal device receives the comprehensive monitoring information.
  • the reminder method may include vibration, ringing, pushing to the notification bar, and the like.
  • when the security level of the comprehensive monitoring information is level 1, the reminder method can be vibration and ringing; when the security level is level 2, the reminder method can be vibration; when the security level is level 3, the reminder method can be a push to the notification bar of the terminal device.
  • the comprehensive monitoring information may also be sent to other terminals. For example, when the security level of the comprehensive monitoring information is level 1, the comprehensive monitoring information can also be sent to the alarm terminal of the police. For example, when the security level of the comprehensive monitoring information is level 1, the comprehensive monitoring information can be sent to the alarm receiving center of the police computer.
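  • one possible encoding of this level-dependent behavior is a small policy table: what content to send, how to remind, and which extra terminals to notify. The specific contents, reminder channels, and the "police alarm center" recipient are assumptions drawn from the examples given above.

```python
# Illustrative per-level dispatch policy for comprehensive monitoring
# information; all names are assumptions based on the examples above.

POLICY = {
    1: {"content": ("video", "reminder"), "remind": ("vibrate", "ring"),
        "extra_recipients": ("police_alarm_center",)},
    2: {"content": ("video",), "remind": ("vibrate",),
        "extra_recipients": ()},
    3: {"content": ("reminder",), "remind": ("notification_bar",),
        "extra_recipients": ()},
}

def dispatch(level):
    """Return what to send, how to remind, and any extra terminals
    for a given security level; unknown levels send nothing."""
    return POLICY.get(level, {"content": (), "remind": (),
                              "extra_recipients": ()})
```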
  • the processor may determine the monitoring mode of the smart device.
  • the monitoring mode may be an operating mode for the smart device to monitor the door lock and its surrounding area.
  • monitoring modes may include a strong security mode and a weak security mode.
  • the strong security mode may refer to the operation mode of the smart device and related devices (eg, the first camera device, the second camera device, etc.) when there is a high demand for monitoring whether the smart device and the surrounding area are safe.
  • the weak security mode may refer to an operating mode of the smart device and related devices when the demand for monitoring whether the smart device and surrounding areas are safe is low.
  • the monitoring mode may also include other modes, for example, normal mode.
  • the normal mode may refer to the operation mode of the smart device and related devices when the demand for monitoring the safety of the smart device and the surrounding area is moderate.
  • the security requirement corresponding to the normal mode is higher than that of the weak security mode, but lower than that of the strong security mode.
  • in different monitoring modes, the smart device may operate in different ways. Take a smart lock installed on a door as an example. When the monitoring mode is the strong security mode, the smart lock automatically locks after the door is closed, and opening the smart lock requires both entering the unlock password and verifying a fingerprint. When the monitoring mode is the weak security mode, the smart lock does not automatically lock after the door is closed, and entering the unlock password or a fingerprint alone is enough to unlock it.
  • in different monitoring modes, the security levels of the comprehensive monitoring information that need to be sent to the terminal device may also differ. Based on the monitoring mode and the security level of the comprehensive monitoring information, it may be determined whether the comprehensive monitoring information needs to be sent to the terminal device. When the monitoring mode is the strong security mode, comprehensive monitoring information of all security levels needs to be sent to the terminal device. When the monitoring mode is the weak security mode, only comprehensive monitoring information of some security levels needs to be sent to the terminal device; for example, the comprehensive monitoring information needs to be sent only when its security level is level 1.
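  • the mode-dependent sending rule can be sketched as below: strong security mode forwards every level, weak security mode only level 1. The mode names and the level-1 cut-off follow the examples in the text; the behavior sketched for the normal mode (levels 1 and 2) is an assumption.

```python
# Illustrative gating of comprehensive monitoring information by
# monitoring mode; the "normal" branch is an assumed middle ground.

def should_send(mode, security_level):
    if mode == "strong":
        return True                      # all levels are forwarded
    if mode == "weak":
        return security_level == 1       # only the highest-risk level
    return security_level <= 2           # assumed rule for normal mode
```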
  • the monitoring mode of the smart device can be adjusted.
  • the monitoring mode can be adjusted by the user.
  • for example, when the smart device is a smart lock installed on the door, the user can adjust the monitoring mode of the door lock to the strong security mode.
  • for example, when the smart device is a smart lock installed on the door, after the processor locks the door, the monitoring mode may be automatically adjusted to the weak security mode.
  • the monitoring mode may be automatically adjusted based on the user's distance from the smart device.
  • for example, when it is detected that the distance between the user and the smart device is less than 30m, the monitoring mode is automatically adjusted to the weak security mode; when it is detected that the distance between the user and the smart device is greater than or equal to 30m, the monitoring mode is automatically adjusted to the strong security mode.
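  • the distance rule reduces to a single threshold comparison, sketched below. The 30m figure is the example threshold from the text; how the distance is obtained (e.g., phone geofencing) is an assumption left outside the sketch.

```python
# Minimal sketch of the distance-based monitoring-mode rule.

def mode_from_distance(distance_m, threshold_m=30.0):
    """User nearby -> weak security mode; user away -> strong."""
    return "weak" if distance_m < threshold_m else "strong"
```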
  • the monitoring mode of the smart device can also be automatically adjusted based on the security level of the comprehensive monitoring information.
  • the smart device is a smart lock installed on the door. When the security level of the comprehensive monitoring information is level 1, the monitoring mode of the door lock can be automatically adjusted to a strong security mode.
  • the monitoring mode of the smart device may also be determined in various other feasible ways, for example, automatically adjusted according to the preset user's work and rest time.
  • the security level of the comprehensive monitoring information can also be adjusted based on the monitoring mode. For example, when the monitoring mode is the strong security mode, the comprehensive monitoring information of each security level other than level 1 can have its security level adjusted one step toward level 1 on the basis of the original security level. For example, if the security level of certain comprehensive monitoring information is level 2 and the monitoring mode is the strong security mode, the security level of the comprehensive monitoring information is adjusted to level 1.
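  • the strong-mode escalation rule can be sketched in a few lines: every level except level 1 moves one step toward level 1 (level 1 is already the highest-risk level and stays put). Mode names follow the text; treating other modes as pass-through is an assumption.

```python
# Illustrative strong-mode escalation of the security level.

def adjusted_level(level, mode):
    """In strong security mode, escalate every level except level 1
    by one step toward level 1; other modes leave levels unchanged."""
    if mode == "strong" and level > 1:
        return level - 1
    return level
```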
  • the control method in some embodiments of this specification realizes judging whether to acquire the first video according to other monitoring information, avoids unnecessary use, and reduces energy consumption.
  • in the comprehensive monitoring information management method of some embodiments of this specification, comprehensive monitoring information of different security levels is given different content and different reminder methods, so that users can be reminded in different ways in different security situations, improving the user experience.
  • the user can adjust the monitoring mode according to the needs, avoiding waste of resources when not needed, and improving the user's sense of security during use.
  • An embodiment of the present specification enables users to understand related situations more clearly from different angles by combining the first video and the second video.
  • An embodiment of the present specification also judges whether it needs to be shot by the second camera according to the content of the first video, avoiding unnecessary use and reducing energy consumption.
  • Fig. 22 is an exemplary flowchart of determining comprehensive monitoring information and sending it to a terminal device according to some embodiments of the present specification. As shown in FIG. 22 , the process 2100 includes one or more steps as follows. In some embodiments, the process 2100 may be performed by a processing module.
  • Step 2101: when the probability that the preset target object exists in the first preset area does not satisfy the preset condition, acquire security information of the smart device.
  • the security information may include security information fed back by the smart device.
  • the security information may include but is not limited to unlocking, locking, lock picking, unlock failure, duress unlocking, etc.
  • the processor can acquire security information through a sensor in the smart device.
  • the smart device may be a smart lock installed on the door.
  • for example, when the smart lock detects, through a sensor in the lock body, that the lock body is being picked open by an external tool, the processor can determine that the security information is lock picking.
  • a sensor is provided on the back of the outer door lock, and the sensor is closely attached to the door body. When an external tool tries to open the outer door lock, the sensor on the back of the outer door lock will be separated from the door body. Once the sensor is separated from the door body, the door lock can determine that the external tool has a lock picking action. Thus, the processor may determine that the security information is lockpicking.
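  • the back-plate sensor check described above can be sketched as follows: the sensor sits between the outer lock body and the door, and if it reports "separated" while no legitimate unlock is in progress, the security information is set to lock picking. The field names and the legitimate-unlock guard are illustrative assumptions.

```python
# Illustrative lock-picking determination from the back-plate sensor.

def security_info_from_sensor(sensor_separated, legitimate_unlock_in_progress):
    """Sensor separated from the door body without a legitimate unlock
    is interpreted as a lock-picking attempt."""
    if sensor_separated and not legitimate_unlock_in_progress:
        return "lock_picking"
    return "normal"
```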
  • the processor can acquire security information through a built-in program of the smart device.
  • the smart device can be a smart lock set on a door, and the processor can determine, based on the smart lock's built-in program, that the operation on the smart device is entering the door, going out, unlocking the door lock, opening the door under duress, repeated trial-and-error unlocking, etc., and thereby determine the corresponding security information as entering the door, exiting the door, unlocking, duress unlocking, repeated trial-and-error unlocking, etc.
  • it can be determined that the operation on the smart device and the corresponding security information are door entry.
  • based on feedback from the smart lock's built-in program, the operation on the smart device and its corresponding security information can also be determined as going out when the lock body is opened from the indoor side and then closed from the outdoor side.
  • the fingerprint unlocking by the user's ring finger can also be pre-set as coercion unlocking.
  • when the program in the door lock detects unlocking by the user's ring finger fingerprint, the acquired security information is duress unlocking.
  • the security information may also be acquired in other feasible ways.
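  • The event-to-security-information mapping described above can be sketched as follows; the event names and the ring-finger duress convention here are illustrative assumptions, not identifiers from this disclosure:

```python
# Illustrative sketch of mapping smart-lock events to security information.
# Event labels and the duress-finger convention are hypothetical.

DURESS_FINGER = "ring"  # finger pre-registered by the user as the duress finger

def security_info_from_event(event, finger=None):
    """Translate a raw lock event into a security-information label."""
    if event == "fingerprint_unlock" and finger == DURESS_FINGER:
        return "duress_unlock"
    mapping = {
        "unlock": "unlock",
        "lock": "lock",
        "tamper_sensor_separated": "lock_picking",  # sensor left the door body
        "unlock_failed_repeatedly": "failed_unlock",
    }
    return mapping.get(event, "unknown")
```

  • In this sketch, the tamper label corresponds to the back-of-lock sensor separating from the door body as described above.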
  • Step 2102 when the security information is abnormal, acquire the second video of the second preset area.
  • whether the security information is abnormal may be determined according to preset conditions.
  • for example, the smart device is a smart lock installed on the door; according to the preset conditions, the security information is determined to be abnormal when it indicates lock picking, repeated unlocking failures, or duress unlocking.
  • Step 2103 based on the first video and the second video, generate comprehensive surveillance information and send it to the terminal device.
  • Step 2103 is consistent with step 1903 in this specification. For the specific details of step 2103, refer to step 1903 above in this specification, and will not be repeated here.
  • relevant reminder information may be sent to the terminal device.
  • the smart device is a smart lock set on the door.
  • a reminder message of successful unlocking can be sent to the terminal device after the unlocking is successful.
  • no operation may be performed.
  • An embodiment of this specification can determine whether there is an abnormality in the smart device through security information even if the smart device does not capture the human body, and monitor the smart device to ensure its safety.
  • Fig. 23 is another exemplary flow chart of determining comprehensive monitoring information and sending it to a terminal device according to some embodiments of this specification. As shown in FIG. 23, the process 2200 may include one or more of the following steps. In some embodiments, the process 2200 may be executed by a processor.
  • Step 2201 acquire first time information of a first video and second time information of a second video.
  • the first time information may refer to time information of the first video.
  • the first time information may include the start time, duration, end time, etc. of the first video.
  • the first time information may be automatically generated when the first camera captures the first video of the first preset area.
  • the second time information may refer to time information of the second video. Similar to the first time information, the second time information may also include the start time, duration, end time, etc. of the second video.
  • the second time information may be automatically generated when the second camera captures the second video in the second preset area.
  • Step 2202 based on the first time information of the first video and the second time information of the second video, combine the first video and the second video to generate a surveillance video.
  • the first video and the second video may be associated based on a difference between the first time information and the second time information.
  • the difference between the first time information and the second time information may refer to a time difference between certain specific time information in the first time information and certain specific time information in the second time information.
  • a type of a specific time information in the first time information may be the same as a type of a specific time information in the second time information.
  • the time information in the first time information and the second time information may both be start time.
  • the start time in the first time information is 08:00:00, October 12, 2033
  • the start time in the second time information is 08:00:10, October 12, 2033; the difference between the first time information and the second time information is then 10 s.
  • multiple pieces of first time information and second time information may be acquired. Based on the first time information of a first video and a preset threshold, second time information whose difference from the first time information is smaller than the preset threshold may be selected from the plurality of second time information; the corresponding second video is then determined based on that second time information, and the first video is associated with the selected second video. For example, if the start time in the first time information of a first video is 08:00:00, October 12, 2033 and the preset threshold is 15 s, second time information with a start time of 08:00:10, October 12, 2033 is selected, the corresponding second video is determined based on it, and the first video is associated with that second video.
  • the first video may also be screened out based on the second time information of the second video; the screening method is similar to that of screening out the second video based on the first time information of the first video, and will not be repeated here.
  • for example, if the end time in the second time information of a second video is 10:00:00, October 11, 2033 and the preset threshold is 20 s, first time information with an end time of 10:00:15, October 11, 2033 is selected; the corresponding first video is determined based on that first time information, and the first video is associated with the second video.
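  • The time-based association above can be sketched as follows, assuming start times are the compared time information:

```python
from datetime import datetime, timedelta

# Sketch: select second videos whose start time differs from the first
# video's start time by less than a preset threshold (fields illustrative).

def associate(first_start, second_starts, threshold):
    """Return indices of second videos within `threshold` of `first_start`."""
    return [i for i, start in enumerate(second_starts)
            if abs(start - first_start) < threshold]

# The document's example: 10 s difference, 15 s threshold -> associated.
first_start = datetime(2033, 10, 12, 8, 0, 0)
second_starts = [datetime(2033, 10, 12, 8, 0, 10),  # 10 s away: selected
                 datetime(2033, 10, 11, 10, 0, 0)]  # far away: rejected
matched = associate(first_start, second_starts, timedelta(seconds=15))
```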
  • the associated first video and second video may be combined, and the combined video may be used as the surveillance video.
  • one way of combining the first video and the second video is to place them in a single video whose display screen is divided into different windows, with the first video and the second video shown in different windows.
  • the first video and the second video may also be combined by dividing them into segments and merging the segments into one video.
  • for example, the 1st to 5th seconds of the combined video are the 1st to 5th seconds of the first video, the 6th to 10th seconds are the 1st to 5th seconds of the second video, the 11th to 15th seconds are the 6th to 10th seconds of the first video, and so on until the combination of the first video and the second video is completed.
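  • The alternating 5-second combination described above can be sketched on lists of per-second frames (this list representation is assumed for illustration):

```python
# Sketch: interleave two videos in fixed-length segments, as in the
# 5-second alternation described above. Videos are modelled as lists.

def interleave(first, second, seg=5):
    combined, i = [], 0
    while i < max(len(first), len(second)):
        combined += first[i:i + seg]   # next segment of the first video
        combined += second[i:i + seg]  # then the matching second-video segment
        i += seg
    return combined
```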
  • the first video and the second video may also be combined in other ways, for example, the first video and the second video may also be spliced into a new video by using panoramic video splicing technology.
  • Some embodiments of the present specification associate the first video and the second video based on the difference between the first time information and the second time information, and combine the associated videos, making it possible to quickly and accurately find the first and second videos related to the same event among multiple videos.
  • the monitoring video is generated by combining the associated first video and the second video, which is convenient for the user to view.
  • the first video and the second video may be combined to generate a surveillance video.
  • the first video and the second video may be combined based on the video duration in the first time information and the second time information.
  • the video with the longer duration between the first time information and the second time information is used as the first section of the surveillance video, and after it ends, the video with the shorter duration is used as the last section of the surveillance video.
  • the surveillance video may include time information, and the time information of the surveillance video may be determined based on the first time information and the second time information.
  • the first time information starts at 08:00:00, October 12, 2033, and ends at 08:03:00, October 12, 2033.
  • the second time information starts at 08:00:10 on October 12, 2033 and ends at 08:03:10 on October 12, 2033.
  • the time information of the surveillance video may be a start time of 08:00:00, October 12, 2033, and an end time of 08:03:10, October 12, 2033.
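  • Under this scheme, the surveillance video's time span is simply the union of the two source spans (earliest start, latest end); a minimal sketch:

```python
from datetime import datetime

# Sketch: surveillance-video time span as the union of both source spans.

def combined_span(start1, end1, start2, end2):
    return min(start1, start2), max(end1, end2)

# The document's example times.
span = combined_span(datetime(2033, 10, 12, 8, 0, 0),
                     datetime(2033, 10, 12, 8, 3, 0),
                     datetime(2033, 10, 12, 8, 0, 10),
                     datetime(2033, 10, 12, 8, 3, 10))
```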
  • the first video and the second video may also be combined in other ways.
  • the first video and the second video are spliced based on the spatial position relationship between the first preset area and the second preset area. Specifically, the overlapping areas of the first preset area and the second preset area are spliced, so that the first video and the second video are spliced into one video.
  • Step 2203 sending the surveillance video to the terminal device as comprehensive surveillance information.
  • For more details about the comprehensive monitoring information and terminal device, refer to FIG. 20 and its related descriptions, which will not be repeated here.
  • by combining the first video and the second video, the user can view related surveillance videos from different angles and gain a more comprehensive understanding of related events.
  • Fig. 24 is another exemplary flow chart of determining comprehensive monitoring information and sending it to a terminal device according to some embodiments of this specification. As shown in Fig. 24, the process 2300 may include the following steps. In some embodiments, the process 2300 may be executed by a processor.
  • Step 2301 the processor acquires optical flow information of a first preset area.
  • Step 2302 the processor determines, based on the first video, whether the operation of the smart device is abnormal.
  • the smart device may be a smart lock set on a door, and operations on the smart device may include operations such as opening and closing the door.
  • whether the operation is abnormal may be determined based on the state of the smart device shown in the first video.
  • the smart device is a smart lock installed on the door.
  • the processor can check the status of the door body and the door frame based on the first video. When the door body and the door frame reach the preset state, the door body operates normally; when the door body and the door frame do not reach the preset state, the door body operates abnormally.
  • the preset state may be a preset state of the door body and the door frame when the door is closed.
  • whether the operation of the smart device is abnormal may also be determined based on the motion monitoring information in the first preset area.
  • Step 2303 when the operation is abnormal, the processor sends the first video to the terminal device as comprehensive monitoring information.
  • For more details about the comprehensive monitoring information and terminal device, refer to the descriptions elsewhere in this specification, which will not be repeated here.
  • Step 2304 when the operation is normal, the processor acquires the locking information of the smart device.
  • the locking information may refer to information about whether the smart device is locked.
  • the locking information of the smart lock may refer to information about whether the smart lock is locked.
  • the lock information can be obtained in various ways, for example, a sensor or a built-in program in the smart device.
  • Step 2305 when the locking information of the smart device indicates that it is unlocked.
  • Step 2306 the processor sends the first video to the terminal device as comprehensive monitoring information.
  • Some embodiments of this specification monitor the operation of the smart device to determine whether the operation is abnormal, and at the same time obtain the locking information of the smart device, which can guard against the situation in which the user closes the smart device but it is not properly locked.
  • Fig. 25 is an exemplary flowchart of judging whether the operation of the smart device is abnormal according to some embodiments of the present specification. As shown in Figure 25, the process 2400 includes the following steps. In some embodiments, the process 2400 may be executed by a processor.
  • Step 2401 based on the first video, acquire motion monitoring information in a first preset area, where the motion monitoring information in the first preset area includes at least a moving object and corresponding position information of the moving object at each time point.
  • the motion monitoring information may refer to detected motion information of objects in the first preset area.
  • the motion monitoring information may include attribute information of the moving object and position information corresponding to the moving object at each time point.
  • the moving object may refer to a moving object in the first video.
  • the moving object includes at least the smart device itself.
  • the moving object may also include other objects, for example, a human body.
  • the attribute information of the moving object may include the type of the moving object and the name of the moving object.
  • the attribute information of the moving object may include that the moving object is a door and the name of the door.
  • the attribute of the moving object may include that the moving object is a person and the name of the person.
  • the type of the moving object can be determined by performing image recognition on the first video; the name of the moving object can be determined by extracting the features of the moving object through image recognition technology and comparing them with features in a database.
  • position information corresponding to the moving object at each time point may be determined based on the positions of the moving object in the frame at different times in the first video.
  • the motion monitoring information may also include other content, for example, the motion speed of the moving object.
  • the motion monitoring information may include: moving object 1: type person, name Wang; moving object 2: type door, name door. At 08:03:10, Wang is at position A and the door is at position B; at 08:03:15, Wang and the door are both at position C.
  • Step 2402 based on the motion monitoring information, it is judged whether there is a human body and/or a foreign object causing the smart device to shut down abnormally.
  • the position of the closing point may refer to the position where the smart device should be when it is closed.
  • the position of the closing point may be the position where the door body should be when the door is closed.
  • if the positions of the human body and the smart device overlap at a certain moment, and after this moment the smart device stops moving or moves back toward its previous position, and the smart device finally stops at a position other than the closing point, it may be determined that a human body is causing the smart device to shut down abnormally.
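  • The overlap-then-stopped-short judgment can be sketched as follows; the per-time-step position records and the simple equality test for overlap are illustrative simplifications:

```python
# Sketch: decide whether a human body kept the smart device (e.g. a door)
# from reaching its closing-point position, from simplified motion records.

def close_blocked(device_positions, body_positions, closing_pos):
    """Each argument lists one position per time step (any hashable values)."""
    # Did the body and the device ever occupy the same position?
    overlapped = any(d == b for d, b in zip(device_positions, body_positions))
    # Did the device end up somewhere other than the closing point?
    stopped_short = device_positions[-1] != closing_pos
    return overlapped and stopped_short
```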
  • Step 2403 when there is a human body and/or a foreign object causing the smart device to shut down abnormally, it is determined that the operation of the smart device is abnormal.
  • Some embodiments of the present specification determine whether there is an abnormality in the smart device through motion monitoring information, thereby improving the accuracy of the monitoring result.
  • Some embodiments of this specification also provide an integrated monitoring information management device.
  • the device includes a processor and a memory; the memory is used to store instructions, and when the instructions are executed by the processor, the device is caused to implement the operations corresponding to the aforementioned integrated monitoring information management method.
  • Some embodiments of this specification also provide a computer-readable storage medium.
  • the storage medium stores computer instructions, and after the computer reads the computer instructions in the storage medium, the computer implements the aforementioned integrated monitoring information management method.
  • in addition to monitoring the indoor and outdoor environments (that is, inside and outside the management area) through the collection device of the smart device, the user can also perform face recognition through the collection device for identity verification.
  • the control method in some embodiments of this specification also includes a method for improving image quality.
  • the trigger information of the smart device may include face recognition information
  • the corresponding operation performed by the smart device may include processing the face recognition information.
  • processing the face recognition information may include setting the exposure weight of the face area of the face recognition image, and processing the face area of the current image acquired by the acquisition device based on the exposure weight to obtain an image of the face area .
  • by setting the exposure weight of the face area in the current image, a clearer image of the face area can be obtained.
  • Fig. 26 is an exemplary flow chart of processing a face area image according to some embodiments of the present specification. Referring to FIG. 26 , it shows a flow 2500 of an embodiment of a method for improving the image quality of a face area image according to this specification, including the following steps:
  • Step 2501 collect the current image under preset lighting conditions.
  • the preset lighting condition may be the lighting intensity preset by the collection device when collecting information.
  • the preset lighting conditions may include light intensity under strong light suppression conditions.
  • the preset lighting condition may include the light intensity with strong light suppression turned off.
  • the preset lighting condition may also include the illuminance after the lighting is enhanced.
  • the preset lighting condition may be the default lighting level of the smart device.
  • the preset lighting condition may also be the lighting intensity set by the user.
  • the face area can be obtained by performing face recognition on the collected current image.
  • for a surveillance camera integrating face and figure recognition functions, corresponding coordinates are produced once a face is recognized, and the area coordinates can be read directly from the camera's chip processing platform.
  • if the recognized figure area is relatively large, it will affect the effect of subsequent adjustments; in that case the figure area can be estimated and cropped according to the proportion of it occupied by the face, so as to lock onto the face area that needs adjustment as precisely as possible.
  • Step 2502 setting the exposure weight of the face area of the current image based on the exposure value of the current image.
  • the acquisition device may acquire the exposure value of the current image in response to turning off the strong light suppression of the current image.
  • if the camera's strong light suppression function is activated automatically according to the lighting conditions of the scene, the image brightness will be adjusted, which affects the subsequent adjustment of the exposure values inside and outside the face area; the camera's strong light suppression function is therefore turned off to avoid this influence.
  • the exposure weight of the face area in the current image may be set based on the exposure value of the current image.
  • the exposure weight may include a ratio of the exposure value of the face area to the exposure value of the current image.
  • the exposure weights inside and outside the face area can be set based on the preset exposure scene corresponding to the exposure value, where the preset exposure scene includes the exposure weights inside and outside the face area corresponding to the exposure value.
  • Step 2503 Process the face area of the current image based on the exposure weight to obtain an image of the face area.
  • the current image may be processed based on the set exposure weights to obtain an image of the face area.
  • the exposure value of the face area may be reset based on the set exposure weight, so that the reset exposure value is better suited to face recognition.
  • the exposure weights inside and outside the face area of the current image can be determined according to the exposure values inside and outside the face area in the exposure scene corresponding to the preset image exposure value, so that the image's exposure value is adjusted according to the different exposure weights inside and outside the face area. This makes the face area clearer in backlit and similar scenes, solving the problem of the face not being clearly visible. Compared with the prior art, it does not require improving the photosensitive sensor, so the cost is lower and the efficiency is higher. When there are multiple face regions, corresponding adjustments can be made simultaneously to make the image of each target face region clearer.
  • the exposure weight can be divided into 8 levels, and the higher the level, the more obvious the corresponding exposure picture, and the higher the brightness of the picture.
  • the exposure weight in the face area can be set to 8
  • the exposure weight outside the face area can be set to 0 or 1 according to the exposure value of the camera, so that the face area will be clearer.
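  • A minimal sketch of this 8-level weighting over a grid of exposure-weight cells (the cell grid is an assumed model, not the camera's actual register layout):

```python
# Sketch: assign the maximum exposure weight (8) to face cells and a low
# weight (0 or 1) elsewhere. The cell grid is an illustrative model.

def set_face_weights(rows, cols, face_cells, inside=8, outside=1):
    """Build a rows x cols weight grid; `face_cells` is a set of (r, c)."""
    return [[inside if (r, c) in face_cells else outside
             for c in range(cols)]
            for r in range(rows)]
```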
  • Fig. 27 is an exemplary flow chart of processing a human face region image according to other embodiments of the present description.
  • a method 2600 for processing a face area image to improve image quality may include the following steps:
  • Step 2601 performing face recognition on the current image to obtain a face area
  • Step 2602. Obtain the exposure value of the current image in response to turning off the strong light suppression of the current image
  • Step 2603 Based on the preset exposure scene corresponding to the exposure value, set the exposure weight inside and outside the face area respectively, so that the current image is processed based on the set exposure weight, and the image of the face area is obtained, wherein the preset exposure scene Including the exposure weight inside and outside the face area corresponding to the exposure value;
  • Step 2604 after detecting that the face in the current image is not within the range of the target area, restore the exposure weight of the current image to the value before setting;
  • Step 2605 switch the strong light suppression of the current image to the state before it is turned off after the restoration is completed.
  • restoring the exposure weight of the current image to the value before setting may include: setting the exposure weight outside the face area to a preset value, and restoring the pre-setting values after a preset delay. For example, after the face or figure in the lens is detected to have disappeared, the weight outside the face area is set to 1; after a delay of, for example, 1 second to ensure the weight value has taken effect, the camera's original image exposure weight values are restored; after a further 1-second delay to ensure the restored values have taken effect, strong light suppression is turned on again. This completes the whole process from recognizing a face to returning to normal settings after it disappears.
  • ordinarily the exposure value of the whole image is fixed and not adjusted by region; however, after the face area has been detected and the exposure weight adjusted by region, if the exposure weight outside the face area has been set to 0 or 1, then once the adjustment is completed and the face disappears, the exposure weight of the image can be restored to 0 or 1, that is, kept consistent with the exposure weight outside the face area.
  • in some embodiments, after performing face recognition on the current image and obtaining the face area, the method may further include: re-dividing the current image into sub-areas and, based on the preset exposure scene, setting the exposure weights inside and outside the re-divided face area respectively, so that the exposure weight in the face area (corresponding to one or more first sub-areas) is greater than the exposure weight outside the face area, which can make the image of the face area clearer.
  • Fig. 28 is a schematic diagram of re-dividing the current image according to some embodiments of the present specification.
  • the image can be divided into 15*15 sub-regions; correspondingly, the original coordinates need to be converted into the 15*15 grid, so that the corresponding exposure weights can be adjusted according to the converted coordinates in subsequent processing.
  • before conversion, it is necessary to adjust the original image data, confirming whether the image has been flipped or rotated and converting back to the original coordinates before performing the grid conversion.
  • the coordinates of the upper left corner of the face image are (x1, y1), and the lower right corner is (x2, y2), then the coordinates after converting to the 15*15 area are:
  • ae_x1 = (480 - x1) / 32;
  • ae_y1 = (480 - y1) / 32;
  • ae_x2 = (480 - x2) / 32;
  • ae_y2 = (480 - y2) / 32.
  • the coordinates of the face area re-divided into 15*15 areas are obtained, and then the corresponding face area is determined. Then, according to the converted coordinates, corresponding weight values can be adjusted inside and outside the face area, so that the adjustment of the exposure weight is faster and the adjustment efficiency is improved.
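  • The coordinate conversion can be sketched directly from the formulas given, assuming a 480-pixel, 15-cell axis (so 32 pixels per cell) and integer quantization; the (480 - x) term reflects the flip the text mentions:

```python
# Sketch: map face-box corner coordinates from a 480x480 image onto the
# 15x15 exposure-weight grid, per the (480 - x) / 32 conversion above.

def to_grid(x1, y1, x2, y2, size=480, cells=15):
    step = size // cells                 # 32 pixels per grid cell
    conv = lambda v: (size - v) // step  # flip, then quantize to a cell index
    return conv(x1), conv(y1), conv(x2), conv(y2)
```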
  • the exposure weight in the face area can be set to 1, and the exposure weight outside the corresponding face area can be set to 0, which can highlight the image of the face, so that the obtained image of the face is more clear.
  • the image quality improvement method may further include: acquiring a current brightness value of the environment; and performing supplementary light processing when the current brightness value is less than a supplementary light threshold.
  • Fig. 29 is an exemplary flow chart of performing supplementary light processing on a face area image according to some embodiments of the present specification.
  • the flow 2700 of supplementary light processing may include the following steps:
  • Step 2701 obtain current configuration information.
  • Step 2702 If the current configuration information satisfies the first condition, perform visible light supplementary light processing based on the current brightness value, and the first condition indicates that the device has preset supplementary light parameters.
  • Step 2703 If the current configuration information satisfies the second condition, perform visible light supplementary light based on the preset threshold, and the second condition indicates that the device performs visible light supplementary light based on the preset threshold.
  • Step 2704 if the configuration information satisfies the third condition, perform infrared supplementary light based on the preset mode, and the third condition indicates that the supplementary light mode is turned off.
  • the ADC value of the photosensitive sensor associated with the camera can be read in real time to determine whether the current ambient brightness needs to be supplemented.
  • the supplementary light can be performed according to the visible-light fill mode set by the user; the available modes include normally on (with settable fill-light brightness), automatic, and off, each with a corresponding fill-light threshold.
  • when the photosensitive ADC value is lower than the fill-light threshold and the fill mode is normally on, the fill light is turned on at the preset brightness; when the fill mode is automatic, the fill light is automatically turned on at an appropriate brightness according to the photosensitive ADC value. Visible-light fill keeps the picture in color, which retains more image detail and provides a better display effect.
  • the current configuration information may include the supplementary light mode preset by the user, and the corresponding supplementary light will be performed according to the supplementary light mode set by the user when performing supplementary light.
  • the first condition is automatic supplementary light: with this configuration, the fill light is controlled according to the current ambient brightness using a preset lookup table. The second condition is supplementary light at a user-set value: when the ambient brightness is lower than the set threshold, the visible fill light is turned on at the brightness set by the user, so the brightness after filling is constant. The third condition is supplementary light off: when it is recognized that the user has turned the fill light off, visible light will not be used; in this case, infrared fill light can be used instead.
  • the traditional infrared fill light can also be supported for fill light.
  • the priority of turning on the infrared fill light is lower than that of the visible fill light.
  • the infrared supplementary light will then be enabled for supplementary light; the infrared fill-light mode can likewise include normally on, automatic, off, and other modes, which this specification will not repeat here.
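  • The mode selection above can be sketched as a small decision function; the mode names, the ADC comparison direction, and the automatic-brightness rule are illustrative assumptions, not values from this disclosure:

```python
# Sketch: choose a fill-light action from the configured mode and the
# photosensitive ADC reading. Names and the auto rule are hypothetical.

def fill_light_action(mode, adc, threshold, preset_brightness):
    if mode == "off":
        return ("infrared", None)              # fall back to infrared fill
    if adc >= threshold:
        return ("none", None)                  # bright enough already
    if mode == "always_on":
        return ("visible", preset_brightness)  # constant user-set brightness
    if mode == "auto":
        return ("visible", threshold - adc)    # scale with how dark it is
    return ("none", None)
```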
  • Fig. 30 is an exemplary flow chart of enhancing a face area image according to some embodiments of the present specification.
  • the picture quality improvement method 2800 for enhancing the face area image may include the following steps:
  • Step 2801 acquire the performance parameters of the terminal equipment
  • Step 2802 determine the corresponding image quality enhancement engine ID based on the performance parameters
  • Step 2803 Send the image quality enhancement engine identifier to the terminal device, so that the terminal device invokes a corresponding image quality enhancement engine based on the image quality enhancement engine identifier to perform enhancement processing on the image of the face area.
  • an image quality enhancement engine is integrated in the terminal device, wherein the image quality enhancement engine is a functional module or an application integrated with an image quality enhancement processing algorithm, and the user can set whether to enable the image quality enhancement function.
  • the image quality enhancement engine can support image quality enhancement processing of machine learning algorithms and deep learning algorithms, and can adaptively select corresponding algorithms for processing according to the performance of the terminal device.
  • the algorithm can be selected according to performance conditions of the terminal device such as CPU clock frequency, memory, and NPU computing power. When performance is good, a machine learning algorithm with a relatively complex processing flow and better effect can be selected for processing; for ordinary devices, a better-matched deep learning algorithm can be used to enhance the image quality.
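The performance-based engine selection in steps 2801 to 2802 can be sketched as below. The scoring formula, thresholds, and engine identifiers (`ml_full`, `dl_lite`) are hypothetical; the specification only states that higher-end devices receive the more complex pipeline.

```python
def pick_engine_id(cpu_ghz, mem_gb, npu_tops):
    """Return an image-quality-enhancement engine identifier based on
    device performance parameters (CPU clock, memory, NPU compute).

    The weighted score and the cutoff of 20 are illustrative assumptions.
    """
    score = cpu_ghz * 2 + mem_gb + npu_tops * 3
    if score >= 20:
        return "ml_full"   # complex pipeline with better effect
    return "dl_lite"       # lighter pipeline matched to ordinary devices
```

The returned identifier corresponds to step 2803: it is sent to the terminal device, which then invokes the matching engine locally.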
  • the machine learning algorithm may include steps such as noise reduction processing, dynamic compensation, color enhancement, sharpness enhancement, contrast enhancement, rendering and display.
  • the user can choose to view the video or a real-time picture; each frame decoded from encoded data such as H.264/H.265 is sent to the picture enhancement engine for a series of algorithm processing steps and then rendered for display. In noise reduction processing, several adjacent frames are compared and the non-overlapping information (that is, noise) is automatically filtered out, yielding a relatively pure and fine picture. In motion compensation, new frames between two adjacent frames are estimated and reconstructed by algorithm and inserted for display, thereby reducing motion blur. Sharper images can then be obtained by sharpening.
  • Some embodiments of this specification also provide an image quality improvement device, which can perform each step of the image quality improvement method described in any of the above embodiments, and the device can include: a face recognition module, an exposure value acquisition module, a weight Adjustment module.
  • the face recognition module is used to carry out face recognition to the current image to obtain the face area;
  • the exposure value acquisition module is used to obtain the exposure value of the current image after strong light suppression of the current image is turned off;
  • the weight adjustment module is used to set the exposure weights inside and outside the face area respectively, based on the preset exposure scene corresponding to the exposure value, so that the current image is processed with the set exposure weights to obtain a face area image, wherein the preset exposure scene includes the exposure weights inside and outside the face area corresponding to the exposure value.
  • the image quality improving device may further include a parameter setting module, configured to restore the exposure weight of the current image to its previous set value after detecting that the face in the current image is no longer within the target area, and, after the restoration is completed, to switch strong light suppression back to the state it was in before being turned off.
  • restoring the exposure weight of the current image to its value before setting may include setting the exposure weight outside the face area to a preset value.
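The weight adjustment and parameter setting modules described above can be sketched together as follows. The grid representation, the weight values, and the method names are illustrative assumptions; the point is only that face-region weights are raised, background weights lowered, and the pre-set weights remembered for restoration.

```python
class ExposureWeights:
    """Sketch of setting and restoring exposure weights over an image grid.

    Cells inside the face rectangle get `face_w`, cells outside get
    `bg_w`; restore() puts the saved pre-set weights back.
    """
    def __init__(self, rows, cols, default=1):
        self.grid = [[default] * cols for _ in range(rows)]
        self._saved = None

    def set_face_region(self, r0, c0, r1, c1, face_w=8, bg_w=0):
        # Remember the weights before setting, so they can be restored
        # once the face leaves the target area.
        self._saved = [row[:] for row in self.grid]
        for r, row in enumerate(self.grid):
            for c in range(len(row)):
                row[c] = face_w if r0 <= r <= r1 and c0 <= c <= c1 else bg_w

    def restore(self):
        if self._saved is not None:
            self.grid = self._saved
            self._saved = None
```

In the device described here, `restore()` would run before strong light suppression is switched back to its previous state.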
  • the image quality improvement device may further include: a supplementary light module, configured to acquire a current brightness value of the environment, and perform supplementary light processing when the current brightness value is less than a threshold for turning on the light.
  • the image quality improvement device may further include: an image segmentation module, configured to re-divide the current image, and convert the original coordinates of the face area into the re-divided coordinate system, based on the exposure value corresponding to The preset exposure scene sets the exposure weights inside and outside the re-divided face area respectively.
  • the image quality enhancement device may further include: an image quality enhancement module, configured to acquire performance parameters of the terminal device, determine a corresponding image quality enhancement engine identifier based on the performance parameters, and send the image quality enhancement engine identifier to the terminal device, The terminal device invokes a corresponding image quality enhancement engine based on the image quality enhancement engine identifier to perform enhancement processing on the image of the face area.
  • users can remotely monitor and view changes of preset objects (including people or objects) in the security area (for example, the management area, surrounding area, etc.) of the smart device.
  • the smart device can be a smart lock installed on the door, and the user may pay attention to the entry and exit of the preset object, for example, whether someone enters and exits, the time of entry and exit, and so on.
  • the user can learn about the surrounding conditions of the smart device by checking the collected and stored relevant information (for example, video recording).
  • the smart device control method may include a method of determining index information of a preset object.
  • the change information of the preset object in the management area is determined, and based on the change information and the existing index information, the current index information of the preset object can be generated.
  • the user can clearly and concisely understand the changes of the preset objects concerned.
  • the step of performing corresponding operations when determining the index information in the control method of the smart device may include: obtaining the existing index information of the management area; determining the change information of preset objects in the management area based on the relevant information; and determining the current index information based on the change information and the existing index information.
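The merge of change information with existing index information described in this step can be sketched as follows. The index structure (a mapping from object identifier to an inside/outside flag plus an event list) and the field names are assumptions for the example, not structures defined by this specification.

```python
def update_index(existing, changes):
    """Produce the current index by folding change information into the
    existing index, without mutating the existing index.

    `existing` maps object id -> {"inside": bool, "events": [...]};
    each change is a tuple (object_id, event, time), where event is
    "enter" or "exit".
    """
    # Deep-copy the existing index so it can still be compared afterwards.
    current = {k: {"inside": v["inside"], "events": list(v["events"])}
               for k, v in existing.items()}
    for obj, event, when in changes:
        entry = current.setdefault(obj, {"inside": False, "events": []})
        entry["inside"] = (event == "enter")
        entry["events"].append((event, when))
    return current
```

If the change list is empty, the current index equals the existing index, matching the case where preset objects have not changed.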
  • the relevant information includes information related to the management area associated with the smart device, for example, may include video information collected in the management area.
  • the relevant information may include preset objects.
  • the processor may determine whether the relevant information includes a preset object.
  • the preset objects may include people (eg, trusted users), animals (eg, pets), objects, etc. within the management area.
  • the number of preset objects may include one or more.
  • the change information of the preset object may include whether the position of the preset object relative to the management area of the smart device (eg, inside or outside the management area) changes, and the change time when the change occurs.
  • the change information of the preset object may include whether the preset object enters or exits the management area.
  • the change information may include the identification information (for example, face recognition information) of the preset object by the smart device, for example, the time when the preset object leaves the door, the time when the preset object enters the door, etc.
  • the management area may include the area inside the door, for example, the door where the smart device is set may be a house door, and the management area may be the interior of the house.
  • the preset target may be a specific target person.
  • the preset objects may include one or more of the trusted users.
  • the preset objects may include wards (for example, elderly people, children, etc.) among the trusted users.
  • the control system can judge whether there is a preset object in the management area through the collected relevant information.
  • the control system can confirm the identity of the preset object through the face recognition information in the relevant information.
  • the control system can determine whether the preset object enters or exits the door, and can record the time of entry and exit, so as to determine the change information of the preset object in the management area.
  • current index information when a preset object in the management area changes, current index information may be generated based on the change and existing index information of the smart device.
  • the change information may also include that the preset objects in the management area have not changed, and at this time, the existing index information may be determined as the current index information.
  • the index information may be the names of preset objects in the management area and the display of their information.
  • the existing index information in the smart device may be referred to as existing index information.
  • the index information determined in the smart device within the current time range may be referred to as current index information.
  • the existing index information may include initial index information, and the initial index information may be created based on trusted preset objects in the management area.
  • the existing index information may include the index information of the preset object in the management area recorded and stored by the control system before the current time.
  • the index information may include the identity of the preset object and whether the preset object is within the management area.
  • the index information may also include preset times when objects enter and/or leave the management area.
  • the current index information of the preset object can be obtained by integrating the determined change information with the existing index information.
  • the control system can update the existing index information based on the current index information, so that the existing index information recorded by the smart device can reflect the actual situation of the preset object.
  • the user can query the information of the preset object and/or the change information of the preset object in the management area, and display the information.
  • the current index information may include all change information of all preset objects, in which the user can search for the target person, so as to inquire about the entry and exit of the target person during the target period.
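The query of a target person's entry and exit during a target period can be illustrated as below. The index layout and the "HH:MM" timestamp format are carried over from the sketch conventions of this example, not mandated by the specification.

```python
def query_events(index, person, start, end):
    """Return the entry/exit events of `person` whose timestamps fall
    within [start, end].

    Timestamps are "HH:MM" strings here purely for illustration; string
    comparison orders them correctly within a single day.
    """
    entry = index.get(person)
    if entry is None:
        return []                      # target person not in the index
    return [(ev, t) for ev, t in entry["events"] if start <= t <= end]
```

A user querying "bob" for the morning would receive only the events recorded inside that window.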
  • the user can remotely monitor and view the changes of preset objects (including people or objects) in the security area (eg, management area, surrounding area, etc.) of the smart device.
  • the smart device may include a smart storage device, and the smart storage device may be used for warehousing/storage in production and life.
  • users can comprehensively record and manage the types and quantities of inbound and outbound items.
  • the intelligent device control method and system provided by some embodiments of this specification include an index information management method and system that can be used to identify, process and query item information.
  • the relevant information collected by the collection device includes information related to the security area (for example, the management area, the third preset area, etc.).
  • based on the access information and the preset object information and/or the change information of the preset object, the index information is determined.
  • the index information management method performs security verification after responding to a user's access request, makes comprehensive statistics on the user's access information and the item information in the item management area, and establishes an index, so as to realize intelligent control and management of the management area.
  • the management and query of item information facilitates the application of smart storage devices in home life and office spaces.
  • the smart device control system includes an index information management system.
  • the index information management system may include a server, a network, an intelligent storage device, a user terminal and a storage device.
  • the management area of the smart storage device may include an item management area, and the item management area may be used to manage the user's items.
  • the index information management system can perform security verification on users who access the item management area of the smart storage device, and perform corresponding operations according to the verification results. For example, after confirming that the user is a trusted user, the user can open the smart storage device or perform other operations.
  • the index information management system can obtain item information in the smart storage device, and identify the item based on the item information. For example, attributes of items are determined, and items are classified according to the attributes of items.
  • the index information management system can provide services for trusted users. For example, provide services for trusted users to access the item management area, or provide item query services.
  • FIG. 31 is a block diagram of an exemplary smart storage device 20 according to some embodiments of the present specification.
  • the smart storage device 20 includes an input module 210 , a detection module 220 , a lock body 230 and a processor 260 .
  • the smart storage device 20 can provide an item management area, which is a closable space, and the opening and closing of the item management area is controlled by the lock body 230 .
  • the article management area of the smart storage device 20 may not be completely closed after being closed.
  • the input module 210 is configured to receive user instructions.
  • the input module 210 may include one or more input devices of the one or more acquisition devices.
  • the user instruction includes a request to access an item management area, and the user instruction may also include a query request, a setting request, a management request, and the like.
  • the input module 210 includes devices capable of receiving user instructions, such as a keyboard, a mouse, a touch screen, a microphone, and the like.
  • the input module is further configured to collect user information.
  • the input module includes a device for collecting user information, such as a fingerprint collection device, a finger vein collection device, a palmprint collection device, a palm vein collection device, a face collection device, an iris scanning device, an image collection device (such as cameras, cameras), retina scanning devices, image acquisition devices, microphones, password input devices, etc.
  • the input module 210 obtains the user's face information and fingerprint information through a camera and a fingerprint collection device.
  • the detection module 220 is configured to acquire detection information of items in the item management area of the smart storage device 20 .
  • the detection module 220 may include one or more detection devices of the one or more acquisition devices.
  • the detection module 220 may consist of part or all of one or more collection devices.
  • the detection module 220 can obtain trigger information.
  • the trigger information may include a user's request to access a management area associated with the smart device.
  • the detection module 220 may collect relevant information.
  • the relevant information collected by the detection module 220 may include item information and/or item access information in the item management area.
  • the detection module 220 includes sensors that can be used to obtain detection information, such as image acquisition devices, laser sensors, infrared sensors, ultrasonic sensors, pressure sensors, and the like. As an example only, the detection module 220 acquires image signal data of the item through an image acquisition device, and/or acquires infrared signal data of the item through an infrared sensor, and/or acquires pressure signal data of the item through a pressure sensor. In some embodiments, the detection module 220 is further configured to acquire status information of the lock body 230 . The status information of the smart storage device 20 is used to indicate the working status of the lock body 230 (such as the unlocked status and the locked status). In some embodiments, the detection module 220 also includes a sensor for obtaining state information of the lock body 230 , such as a pressure sensor, a contact sensor, a Hall sensor, etc. installed on the lock body 230 .
  • the lock body 230 is configured to perform an unlocking operation based on a control signal sent by the processor 260 .
  • the processor 260 sends an unlock control signal to the lock body 230 based on the access request of the trusted user; in response to receiving the unlock control signal, the lock body 230 performs an unlock operation to allow the trusted user to access the item management area.
  • the lock body 230 is further configured to perform a locking operation based on a control signal sent by the processor 260 .
  • the processor 260 judges, based on the state information of the lock body 230 obtained by the detection module 220, that the lock body 230 is in the open state and has remained open for more than a predetermined time (such as 10 minutes, 15 minutes, or 20 minutes); the processor 260 then sends a lock control signal to the lock body 230. In response to receiving the lock control signal, the lock body 230 performs a locking operation to close the item management area.
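The timeout check that triggers the lock control signal can be sketched as follows. The function name and the use of plain seconds for timestamps are assumptions; the 600-second default corresponds to the 10-minute example in the text.

```python
def should_relock(lock_state, open_since, now, timeout=600):
    """Decide whether the processor should send a lock control signal.

    `open_since` and `now` are timestamps in seconds; the lock is
    re-engaged only if it is unlocked and has exceeded the timeout.
    """
    return lock_state == "unlocked" and (now - open_since) > timeout
```

The processor would poll this check against the lock-body state information reported by the detection module.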
  • the smart storage device may include a smart storage box, and the smart storage box may include a box body that is locked or unlocked by the lock body 230 .
  • the box may include a closure for storing items.
  • the processor 260 is configured to process the information and/or data in the index information management process, including: obtaining a user's access request; performing security verification of the user's identity based on the access request, controlling the lock body 230 to perform an unlocking operation so as to allow a trusted user to access the item management area, and generating access information; obtaining the item information and/or item access information of the item management area during the trusted user's visit; and determining the index information based on the item information and/or item access information, the index information being determined at least based on the access information.
  • for the specific content of the processor 260 processing information and/or data in the process of index information management, please refer to other parts of this specification, such as FIG. 32, FIG. 33 and their descriptions, which will not be repeated here.
  • the smart storage device 20 may further include a positioning module 240 .
  • the location module 240 is configured to acquire location information of the smart storage device 20 .
  • the positioning module 240 includes a positioning device for obtaining location information, such as a GPS positioning device, a Beidou positioning device, a GPS Beidou dual-mode positioning device, and the like.
  • the smart storage device 20 may further include an output module 250 .
  • the output module 250 is configured to output a signal to the outside based on the control signal of the processor 260, such as displaying index information to the user, or sending an alarm message indicating that the security verification has not been passed to the user.
  • the output module 250 includes a device for outputting visual signals, such as a display.
  • the output module 250 includes a device for outputting audio signals, such as a speaker.
  • FIG. 32 is a block diagram of an exemplary index information management system 30 according to some embodiments of the present specification.
  • the system 30 includes a security module 310 and a management module 320 .
  • the security module 310 and the management module 320 can be implemented on the server or on the smart storage device 20 (for example, on the processor 260).
  • the security module 310 is configured to obtain a user's request to access the item management area.
  • the security module 310 can obtain the user's access request by communicating with the input module 210 of the smart storage device 20 , or the security module 310 can obtain the user's access request by communicating with the terminal device 160 .
  • the security module 310 is also configured to perform security verification on the user's identity based on the user's access request, so as to allow the trusted user to access the item management area and generate access information.
  • the item management area is a closable space.
  • Access information refers to the collection of information associated with an access request. As an example only, access information may include, but not limited to, user identification (such as user ID, user name, user avatar, etc.), access event, access time (such as unlock time, lock time) and the like.
  • the security module 310 controls the lock body 230 of the smart storage device 20 to perform an unlocking operation, so as to allow trusted users to access the item management area of the smart storage device 20 .
  • the security module 310 includes a user authentication sub-module.
  • the access request carries user information corresponding to the user identity
  • the user verification submodule is configured to determine whether the user is a trusted user based on the user information of the user, and issue an alarm message when the determination result is no.
  • trusted user refers to a user who has access rights to the item management area of the smart storage device 20 . There can be one or more trusted users.
  • the user can set and enter user information during access to the smart storage device 20 in the initialization state (such as the state of first use, of data reset, or of data loss), and the processor 260 determines, based on the user settings, that the user is a trusted user, so that the user obtains access rights.
  • a trusted user may set and enter user information of other users during access to the smart storage device 20 , so that other users may obtain access rights, and the processor 260 determines that other users are trusted users based on user settings.
  • the user verification submodule is further configured to compare the user information of the user who sends the access request with the preset trusted user, and determine whether the user is a trusted user based on the comparison result.
  • the term "user information" refers to a collection of information that can be used to identify a user.
  • the user information of the preset trusted user is pre-entered and stored in the storage device 180, and the user verification sub-module can call the user information of the preset trusted user stored in the storage device 180.
  • user information may include one or more of fingerprint information, finger vein information, palmprint information, iris information, retina information, face information, voiceprint information, and digital password information.
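The user verification submodule's comparison of access-request credentials against the preset trusted users can be sketched as below. The exact-equality matcher is a placeholder assumption; a real system would use a biometric matcher (fingerprint, face, voiceprint, etc.) in its place.

```python
def verify_user(sample, trusted_users, match=lambda a, b: a == b):
    """Compare the user information carried by an access request against
    each pre-entered trusted user.

    `trusted_users` maps user id -> list of enrolled reference samples.
    Returns (is_trusted, user_id_or_None); the caller issues the alarm
    information when the first element is False.
    """
    for user_id, enrolled in trusted_users.items():
        if any(match(sample, ref) for ref in enrolled):
            return True, user_id
    return False, None
```

On a `False` result, the security module would trigger the local and/or remote alarm paths described below rather than unlocking.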
  • the user verification submodule is further configured to issue an alarm message when the judgment result is negative.
  • the alarm information sent in response to judging that the user is not a trusted user can be used to warn the access behavior of the untrusted user.
  • the alarm information may include local alarm information and remote alarm information, and the alarm information may be sent in at least one form of a voice signal, a vibration signal, and a text signal.
  • the local alarm information is sent through the output module 250 of the smart storage device 20 .
  • the image information of the untrusted user can be acquired by the image acquisition device of the input module 210; the local alarm can be sent by displaying the image information of the untrusted user on the display of the output module 250, together with a text message indicating that the access request of the untrusted user has been rejected.
  • the remote alert message is sent through an output device of the trusted user's terminal device 160 .
  • the terminal device 160 receives the remote alarm information, displays the image information of the untrusted user through its display, and broadcasts, through its loudspeaker, voice information indicating that an untrusted-user access event has occurred at the smart storage device 20.
  • the management module 320 is configured to obtain item identification information of the item management area during the trusted user's visit, the item identification information including item information and/or item access information, and to determine index information based on the item identification information, the index information being determined at least based on the access information.
  • the index information is at least used to indicate access information of items of trusted users during access.
  • the term "access period" refers to the time period from the start of executing the access request to the end of the execution of the access request. As an example only, the access period may be a time period from when the lock body 230 of the smart storage device 20 performs an unlocking operation to when the lock body 230 is locked.
  • the management module 320 further includes an identification sub-module 321 .
  • the identification sub-module 321 is configured to obtain the detection information obtained by the detection module 220; based on the detection information obtained by the detection module 220, determine the item identification information of the item management area during the trusted user's visit.
  • the item identification information includes item information, such as the attribute (type, material, etc.) and value of the item.
  • the item identification information includes item access information, such as item access status and access time.
  • the identification submodule 321 may include an access determination unit; the access determination unit is configured to determine the access information of the item based on the detection information.
  • the access information of the item is used to indicate the access status of the item, including the access status of all items that have been accessed or not accessed in the item management area of the smart storage device 20 during the access period.
  • access states may include deposited, withdrawn, and inherent. Among them, inherent refers to the state in which the item was already stored in the smart storage device 20 before the access request was executed (i.e., before the item management area was opened) and was not taken out after the access request was executed (i.e., after the item management area was closed).
  • the changes shown by the chronological arrangement of probe information can be used to indicate the access status of the item.
  • the access determination unit may determine the access status of the item based on changes in the detection information. For example, the access status of an item can be determined by analyzing changes in pressure sensing data (eg, an increase in pressure indicates that an item is deposited). For another example, the access state of the item can be determined by comparing the image data at the unlocking time point and the locking time point of the smart storage device 20 .
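The pressure-change analysis mentioned in the example above can be sketched as follows. The noise tolerance `eps` and the function name are assumptions; the three returned states mirror the deposited/withdrawn/inherent states defined earlier.

```python
def classify_access(p_before, p_after, eps=1.0):
    """Infer an item's access state from pressure-sensor readings taken
    at the unlock and lock time points.

    `eps` absorbs sensor noise so small fluctuations are not treated as
    access events.
    """
    delta = p_after - p_before
    if delta > eps:
        return "deposited"     # pressure increased: an item was stored
    if delta < -eps:
        return "withdrawn"     # pressure decreased: an item was taken out
    return "inherent"          # no meaningful change during the access
```

The data-change timestamp of the same pressure signal would serve as the deposit or withdrawal time, as the access determining unit describes.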
  • the access determining unit is further configured to determine the access time of the item based on the detection information, for example, the data change time stamp of the pressure sensing data may be used as the deposit time or withdrawal time of the item.
  • the identification submodule 321 may include an attribute determining unit; the attribute determining unit is configured to determine the attribute of the item based on the detection information.
  • the term "properties of an item" refers to a collection of characteristics inherent to the item itself.
  • the attribute of the item may include but not limited to the type, specification, material, etc. of the item.
  • the types of items may include but not limited to jewelry, currency, certificate documents, calligraphy and painting works of art, etc.
  • the specifications of items may include but not limited to weight, volume, size, color, etc.
  • the materials of items may include but are not limited to metal, precious stones, paper, textiles, etc.
  • the attribute determination unit may determine the type of the item based on the detection data. For example, image data or laser scanning data can be used to identify the type of items through an AI recognition model. In some embodiments, the attribute determination unit may determine the material of the item based on the detection data. For example, for jewelry items, infrared sensing data can be used to determine the material of the item through infrared spectrum analysis. In some embodiments, the attribute determination unit may determine the specification of the item based on the detection data. For example, pressure sensor data can be used to determine the weight of an item through analytical calculations.
  • the attribute determination unit is further configured to determine the item attribute of the item through a multi-data fusion algorithm based on the detection information.
  • multi-data fusion integrates the incomplete information about the attributes of the same item provided by multiple sensors to form a relatively complete and consistent perceptual description, thereby achieving more accurate identification and judgment. Determining item attributes through a multi-data fusion algorithm can improve the accuracy and efficiency of item attribute recognition.
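A deliberately simple form of such fusion is confidence-weighted voting across sensors, sketched below. The sensor names, labels, and confidences are illustrative assumptions; real fusion algorithms (e.g., Bayesian or Dempster-Shafer methods) are considerably more involved.

```python
def fuse_attributes(readings):
    """Fuse per-sensor attribute hypotheses by summing confidences.

    `readings` maps sensor name -> (attribute_label, confidence); the
    label with the highest total confidence across sensors wins.
    """
    scores = {}
    for sensor, (label, conf) in readings.items():
        scores[label] = scores.get(label, 0.0) + conf
    return max(scores, key=scores.get)
```

Here two moderately confident sensors agreeing on one attribute can outvote a single more confident sensor, which is the basic benefit of combining incomplete per-sensor information.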
  • the identification sub-module 321 also includes a value determination unit.
  • the value determination unit is configured to determine the value of the deposited item based on the probe information. The value of an item usually refers to the market value of the item.
  • the value determination unit may retrieve a reference item matching the deposited item based on the detection information, and determine the value of the deposited item based on the value of the reference item. In some embodiments, the value determination unit may also determine the value of the deposited item based on user input.
  • the identification sub-module provides a basis for comprehensive management and quick query of item information by quickly identifying items, determining item information, item access status and access time.
  • the management module 320 further includes an index submodule 322; the index submodule 322 is configured to determine index information based at least on the item identification information and the access information.
  • the item identification information includes item information and/or access information of the item.
  • the index sub-module 322 further includes an index generation unit; the index generation unit is configured to create index information based at least on access information, item identification information and preset rules when the created index information is not detected.
  • the created index information may be stored in the storage device 180 .
• the term "preset rules" refers to rules for establishing, querying and displaying index information. For the smart storage device 20 in the initialization state, if the index submodule 322 does not detect created index information (such as when the index information has not been created, the created index information is lost, or the created index information is reset), the index generation unit may create the index information.
  • the index generating unit may create the index information based on a management request of the trusted user requesting to create the index information. In some embodiments, the index generation unit can automatically create index information after the lock body 230 enters the locked state from the unlocked state.
  • the index submodule 322 further includes an index update unit; the index update unit is configured to update the index information based at least on the access information and the item identification information when the created index information is detected.
  • the index update unit can call and update the index information stored in the storage device 180 .
  • the index sub-module 322 further includes a query display unit; the query display unit is configured to perform query display based on the index information. In some embodiments, the query display unit is further configured to determine index information corresponding to the query request based on the query request of the trusted user, and display the index information corresponding to the query request. In some embodiments, the query display unit may be used for local query display. For example, the query display unit can acquire the query request received by the input module 210 and display index information corresponding to the query request through the display of the output module 250 . In some embodiments, the query display unit may be used for remote query display. For example, the query display unit can obtain the query request sent by the terminal device 160 , and the query display unit sends the index information corresponding to the query request to the terminal device 160 and displays it on the display of the terminal device 160 .
• For details about creating index information, updating index information, and querying and displaying based on index information, please refer to other parts of this specification, such as step 2904 in FIG. 34 and its description, which will not be repeated here.
• the index sub-module establishes the index information associated with users and items, so that a user can quickly learn the current and historical movements of items through index information queries, and can fully understand the item usage of other trusted users of the smart storage device, which facilitates the overall management of items.
  • Fig. 33 is a block diagram of another exemplary index information management system 40 according to some embodiments of the present specification.
  • the system 40 includes a security module 410 and a management module 420 .
  • the security module 410 and the management module 420 can be implemented on the server 160 or the smart storage device 20 , such as the processor 260 .
  • the security module 410 is configured to obtain a user's request to access the item management area; based on the user's access request, perform security verification on the user's identity and generate access information, and the item management area is a closable space.
  • security module 410 includes a user authentication submodule.
• For the specific content of the user verification sub-module, please refer to other parts of this specification, such as the user verification sub-module of the security module 310 in FIG. 32 and its description, which will not be repeated here.
• the security module 410 also includes a location monitoring submodule; the location monitoring submodule is configured to judge whether the smart storage device 20 is located in the safe area based on the location information acquired by the positioning module 240, and to send an alarm message when the judgment result is no.
• the term "safe area" refers to the permitted safe movement range of the smart storage device 20 .
  • the safe area can be a circle, ellipse, polygon or other irregular graphic area that expands outward from a center.
  • safe areas may be determined based on user input. For example, when the location monitoring sub-module obtains a setting request from a trusted user to set a security area by communicating with the input module 210 or the terminal device 160, the location monitoring sub-module determines the security area based on the setting information carried by the setting request, and sets the security area The setting information for is stored in the storage device 180.
  • the security zone can be automatically generated based on preset security zone setting rules.
  • the smart storage device 20 (such as the system 40) is in an initialization state.
• the position monitoring submodule obtains the current position coordinates of the smart storage device 20 through communication with the positioning module 240, generates a circular area centered on the current position coordinates with a preset value (such as 10 meters, 20 meters or 30 meters) as the radius, sets the circular area as the safe area, and stores the setting information of the safe area in the storage device 180 for the position monitoring submodule to call when subsequently performing position monitoring.
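The circular safe-area check above amounts to a simple geofence test. The sketch below assumes planar coordinates for clarity; real GPS positions would need a geodesic distance:

```python
import math

def in_safe_area(current, center, radius_m):
    """Return True if `current` (x, y) lies within the circular safe area
    centered at `center` with radius `radius_m` (meters). Planar coordinates
    are an illustrative simplification, not the patent's positioning method."""
    dx, dy = current[0] - center[0], current[1] - center[1]
    return math.hypot(dx, dy) <= radius_m

center = (0.0, 0.0)  # position coordinates recorded at initialization
print(in_safe_area((3.0, 4.0), center, 10.0))    # inside: distance 5 m → True
print(in_safe_area((30.0, 40.0), center, 10.0))  # outside: distance 50 m → False (alarm)
```

When the check returns False, the monitoring logic would trigger the local or remote alarm described below.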
  • the alarm information sent in response to judging that the location of the smart storage device 20 is not within the safe area can be used to alert the behavior of moving the smart storage device 20 beyond the range.
  • the security module 410 may terminate the alert based on a setup request by the trusted user requesting that the alert be dismissed.
• For the specific content of the alarm information, please refer to other parts of this specification, such as the user verification sub-module of the security module 310 in FIG. 32 and its description, which will not be repeated here.
  • the position monitoring sub-module can further improve the use safety of the smart storage device by monitoring the position of the smart storage device and warning against unauthorized movement behaviors.
• the management module 420 is configured to obtain the item identification information of the item management area during the trusted user's visit, the item identification information including the item information and/or the access information of the item; and to determine index information based on the item identification information, the index information being determined at least based on the access information.
  • the management module 420 further includes an identification sub-module 421 ; the identification sub-module 421 is configured to determine item identification information based on the detection information acquired by the detection module 220 .
  • the identification sub-module 421 please refer to other parts of this specification, such as the identification sub-module 321 in FIG. 32 and its description, and step 2903 in FIG. 34 and its description, which will not be repeated here.
  • the management module 420 further includes an authentication submodule 422; the authentication submodule 422 is configured to determine the ownership information of the item based at least on the access information and the detection information obtained by the detection module 220.
  • the term "ownership information" is used to indicate the ownership relationship between an item and a user.
  • the ownership information of the item can be determined based on the access information and the item identification information determined based on the detection information of the identification sub-module 421 .
  • the authentication submodule 422 may include a first authentication unit configured to determine the ownership information of the deposited item based on the access information and the detection information acquired by the detection module 220 .
  • the ownership information of the deposited item may be determined based on the access information and the item identification information determined based on the detection information of the identification sub-module 421 . Specifically, based on the access information of the item in the item identification information, it can be determined whether there is an item stored in the item management area of the smart storage device 20 during the access period.
• the first authentication unit may analyze the coincidence between the depositing time of the item and the unlocking period of the user based on the unlocking time and locking time in the access information and the depositing time of the item in the item identification information, and determine that the user whose access period coincides with the depositing time of the item is the owner of the deposited item.
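The time-coincidence attribution above can be sketched as checking which user's unlock/lock interval covers the deposit time. The session structure and plain numeric timestamps are illustrative assumptions:

```python
def owner_of_deposit(deposit_time, access_sessions):
    """access_sessions: list of (user_id, unlock_time, lock_time) tuples.
    The depositor is the user whose unlocked period covers the deposit time.
    Times are plain numbers (e.g., epoch seconds) for illustration only."""
    for user_id, unlock_time, lock_time in access_sessions:
        if unlock_time <= deposit_time <= lock_time:
            return user_id
    return None  # no coinciding session: ownership cannot be attributed

sessions = [("alice", 100, 160), ("bob", 200, 260)]
print(owner_of_deposit(230, sessions))  # → bob (deposit falls in bob's session)
print(owner_of_deposit(180, sessions))  # → None (between sessions)
```

The returned user identifier would then be recorded as the ownership information of the deposited item.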
• For the specific content of determining the ownership information of the deposited item, please refer to other parts of this specification, such as step 3002 in FIG. 35 and its description, which will not be repeated here.
• the authentication sub-module 422 may include a second authentication unit configured to acquire historical ownership information of the items in the item management area of the smart storage device 20, and to determine the ownership information of the taken-out items based on the detection information and the historical ownership information.
  • the ownership information of the retrieved item may be determined based on the item identification information and historical ownership information determined by the identification sub-module 421 .
• the historical ownership information includes the ownership information of all deposited items during the period from the initialization state of the smart storage device 20 (such as the state of first use of the smart storage device 20, the state of data reset, or the state of data loss) to the last time the lock body 230 entered the locked state.
  • the historical ownership information can be stored in the storage device, and the first authentication unit can update the historical ownership information based on the ownership information of the currently deposited item after the lock body 230 enters the locked state from the unlocked state.
  • the second authentication unit is further configured to judge whether the user taking out the item is the owner of the item, and send an alarm message if the judgment result is no.
  • the alarm information sent in response to judging that the user is not the owner of the item taken out can be used to warn the non-owner of the item taking behavior.
  • the authentication sub-module can increase the dimension of item information and expand the usage scenarios by authenticating the deposited items and determining the ownership information.
  • the authentication sub-module authenticates the items taken out and warns the behavior of non-owners' access to the items, so that the smart storage device can take into account sharing and security in multi-person usage scenarios.
  • the management module 420 further includes an index submodule 423; the index submodule 423 is configured to determine index information based on access information, item identification information and ownership information.
  • the item identification information includes item information and/or access information of the item.
  • the index information is used to indicate the situation of the trusted user accessing the item in the item management area during the access period, and the ownership relationship between the trusted user and the accessed item.
• the index sub-module establishes index information based on the access information, the item identification information, and the ownership information, which can realize information statistics, management, and query of personal items in multi-person usage scenarios, realize multi-dimensional management of item information, and adapt to application scenarios with high security requirements.
  • Fig. 34 is an exemplary flowchart of index information management according to some embodiments of the present specification.
  • Process 2900 may be performed by system 40 .
  • process 2900 may be implemented as a set of instructions (eg, an application program) stored in storage device 180 .
  • the processor 260 and the modules in FIG. 31 , FIG. 32 , and FIG. 33 may execute at least a part of the set of instructions, and when executing at least a part of the instructions, the processor 260 , these modules may be configured to execute the process 2900 .
  • the operations of process 2900 shown below are for illustration purposes only. In some embodiments, process 2900 may be accomplished with one or more additional operations not described and/or one or more operations not discussed herein. Additionally, the order of the operations of process 2900 as shown in FIG. 34 and described below is not intended to be limiting.
  • Step 2901 obtain a user's request for accessing the item management area.
  • Step 2901 may be executed by the processor 260 (such as the security module 310, 410).
  • the processor 260 may obtain the user's access request by communicating with the terminal device 160 . In some embodiments, the processor 260 may obtain the user's access request by communicating with the input module 210 of the smart storage device 20 .
  • Step 2902 based on the user's access request, perform security verification on the user's identity, and generate access information, and the item management area is a closable space.
  • Step 2902 may be performed by processor 260 (eg, security module 310, 410).
  • step 2902 may include a user verification step.
  • the access request carries user information corresponding to the user's identity, and the user verification step can judge whether the user is a trusted user based on the user's user information, and send an alarm message when the judgment result is no.
  • the user verification step may be performed by the processor 260 (such as the user verification sub-module 411).
  • the user verification step further includes: acquiring user information of the user and user information of a preset trusted user.
• the processor 260 may simultaneously obtain the user information used for user security verification, and the specific type of the obtained user information is determined by the user information of the preset trusted user pre-recorded in the storage device 180.
• For example, if the user information of the pre-recorded preset trusted user includes digital password information and voiceprint information, the processor 260 obtains at least one of the user's digital password information and voiceprint information through the input module 210 or the terminal device 160 to perform security verification.
  • the user verification step further includes: comparing the user information of the user with that of a preset trusted user, and judging whether the user is a trusted user based on the comparison result.
• the comparison may be based on one type of user information. For example, the user's fingerprint information is compared with the fingerprint information of the preset trusted user, and the similarity between the two is determined through fingerprint similarity analysis. If the similarity is higher than a threshold (such as 95%, 98%, or 99%), the user can be judged to be a trusted user; otherwise, the user is judged to be an untrusted user.
  • the comparison can be made based on various information in the user information.
• the processor 260 can perform face recognition analysis based on the user's face information and the preset trusted user's face information to obtain a first matching degree, and can perform iris recognition analysis based on the user's iris information and the preset trusted user's iris information to obtain a second matching degree.
  • the processor 260 scores the first matching degree and the second matching degree based on the scoring model, and calculates the total matching score. If the score is higher than the threshold (such as 90 points, 95 points, 99 points), it is judged that the user is a trusted user, otherwise it is judged that the user is an untrusted user.
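The multi-modal scoring described above can be sketched as a weighted combination of the two matching degrees on a 100-point scale. The linear model and the weights are illustrative assumptions; the specification only states that a scoring model combines both degrees:

```python
def combined_match_score(face_match, iris_match, w_face=0.6, w_iris=0.4):
    """Score two matching degrees (each in 0..1) on a 100-point scale.
    The weights and linear form are hypothetical, not the patent's model."""
    return 100.0 * (w_face * face_match + w_iris * iris_match)

def is_trusted(face_match, iris_match, threshold=90.0):
    """Judge the user as trusted when the total score reaches the threshold."""
    return combined_match_score(face_match, iris_match) >= threshold

print(is_trusted(0.98, 0.95))  # strong match on both modalities → True
print(is_trusted(0.70, 0.60))  # weak match → False (untrusted user)
```

Combining modalities this way lets one strong factor compensate for a slightly weaker one while still rejecting users who match poorly overall.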
  • the user verification step further includes: sending out an alarm message when the judgment result is negative.
  • the alarm information includes local alarm information and remote alarm information.
• For details about the access request, access information, user information, and alarm information, please refer to other parts of this specification, such as the security module 310 in FIG. 32 and its description, which will not be repeated here.
  • step 2902 also includes a location monitoring step.
• the item management area is located in the movable storage device, and the position monitoring step can monitor whether the position of the item management area is within the safe area, and send an alarm message when the judgment result is no.
  • the location monitoring step may be performed by the processor 260 (eg, the location monitoring sub-module 412).
• the processor 260 can obtain location information through the positioning module 240 of the smart storage device 20 .
  • the processor 260 can acquire the location information of the smart storage device 20 in real time through the positioning module 240 .
  • the processor 260 may periodically receive location information through the positioning module 240, for example, the positioning device sends the location information to the processor 260 every 10 minutes, 15 minutes or 30 minutes.
  • the processor 260 obtains the location information through the positioning module 240 when determining that the trigger condition is met.
  • the trigger condition for acquiring location information may be that the processor 260 receives an access request.
• when the processor 260 judges that the location of the smart storage device 20 is outside the safe area, it can send a local alarm message through the output module 250 of the smart storage device 20, such as playing a voice message that prompts returning the device to the safe area or asks the trusted user to dismiss the alert, etc.
• when the processor 260 judges that the location of the smart storage device 20 is outside the safe area, it can send a remote alarm message through the terminal device 160 of the trusted user, such as a graphic-and-text alarm on the display of the terminal device 160, prompting the trusted user about the current location of the smart storage device 20 and reminding the trusted user that, if the movement of the smart storage device 20 to its current location is trustworthy, the alarm can be disarmed through the trusted user's settings.
  • Step 2903 acquiring item identification information of the item management area during the trusted user's visit, where the item identification information includes item information and/or item access information.
  • Step 2903 can be executed by the processor 260 (such as the management module 320, 420).
  • the processor 260 may obtain detection information of the item management area; based on the detection information, determine item identification information of the item management area during the user's visit. Wherein, the detection information can be acquired through the detection module 220 of the smart storage device 20 . In some embodiments, the item information includes at least the attributes of the item and/or the value of the item.
  • step 2903 further includes an access determination step.
• the access determination step may determine the access information of the items in the item management area during the user's visit based on the detection information.
  • the access determination step may be performed by the processor 260 (eg, the access determination unit of the identification sub-module 321, 421).
  • the item's access information may indicate the item's access status, including deposited, withdrawn, and owned.
  • the item's access information may indicate the item's access time, including deposit time and withdrawal time.
• the processor 260 may determine the access information of the item after the item management area is closed. For example, the processor 260 controls the lock body 230 of the smart storage device 20 to perform an unlocking operation to allow the trusted user to access the item management area, and the trusted user deposits and/or takes out items. After the lock body 230 of the smart storage device 20 is locked, the item management area is closed, and the processor 260 can perform image recognition and analysis based on the image data captured at the time points when the smart storage device 20 was unlocked and when it was closed, and determine the items in the item management area of the smart storage device 20 whose access state is deposited and/or taken out.
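Comparing the two recognition results amounts to a set difference between the items seen at unlocking and at closing. The item labels below are hypothetical:

```python
def classify_access(items_at_unlock, items_at_lock):
    """Compare the item sets recognized from images captured at unlocking
    and at closing: items present only afterwards were deposited, items
    present only before were taken out. Labels are illustrative examples."""
    before, after = set(items_at_unlock), set(items_at_lock)
    return {"deposited": after - before, "taken_out": before - after}

result = classify_access({"ring", "watch"}, {"watch", "necklace"})
print(sorted(result["deposited"]))  # → ['necklace']
print(sorted(result["taken_out"]))  # → ['ring']
```

Items appearing in both sets keep their existing access state and need no update.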
  • the identifying step further includes an attribute determining step.
• the attribute determining step may determine the attributes of the items in the item management area during the user's visit based on the detection information.
  • the attribute determining step may be performed by the processor 260 (such as an attribute determining unit).
  • the processor 260 may determine the attribute of the item through a machine learning model based on the detection information.
  • the machine learning model may include but not limited to a neural network model, a decision tree model, a support vector machine model, etc. or any combination thereof.
• the processor 260 can call the attribute recognition model stored in the storage device 180; the attribute recognition model is generated by training the machine learning model with a labeled detection information set as samples, and can take the image data or laser scanning data of the item as input to identify the type and material of the item.
  • the processor 260 may determine the attribute parameters of the item through a parameter calculation model based on the detection information, and determine the attribute of the item based on the attribute parameters of the item. For example, the processor 260 can call the density calculation model stored in the storage device 180, calculate the density of the item based on the pressure sensing data and the ultrasonic sensing data, and determine the material (such as metal, jade, etc.) of the item through the density of the item. In some embodiments, the processor 260 may determine the item attribute of the item through an image-text recognition model based on the detection information.
• an item may carry a graphic-and-text mark indicating the attributes of the item (such as a text mark indicating that the type of the item is a ring, the material is gold, and the specifications include a weight of 15 g and a diameter of 2 cm), and the processor 260 may call the graphic-and-text recognition model (such as an OCR model) stored in the storage device 180, which determines the type, material and specification of the item by acquiring the image data with the graphic-and-text mark of the item.
  • the identifying step further includes a value determining step.
• the value determining step may determine the value of the items in the item management area during the user's visit based on the detection information.
  • the value determination step may be performed by a processor 260 (eg, a value determination unit). In a case where it is determined by the access determination step that the deposited item exists in the item management area, the value of the deposited item may be further determined.
  • the specific method of determining the value of the deposited items can be selected according to the attributes of the deposited items.
  • the step of determining the value may further include a step of determining the value of the deposited item through a network search.
• the processor 260 performs a network search based on the image data of the stored items to determine one or more candidate items whose matching degree is higher than a threshold.
• the processor 260 displays the one or more candidate items through the display of the smart storage device 20 or the display device of the terminal device 160 of the trusted user, and the trusted user determines a target reference item from the one or more candidate items. If the trusted user does not determine the target reference item within a predetermined time (e.g., 1 minute), the processor 260 automatically determines the candidate item with the highest matching degree as the target reference item.
  • Processor 260 may determine the value of the deposited item based on the retrieved market value of the target reference item.
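The candidate selection with its timeout fallback can be sketched as follows. The candidate names, matching degrees, and market values are made-up illustration data:

```python
def choose_reference_item(candidates, user_choice=None):
    """candidates: list of (item_name, match_degree, market_value) tuples.
    Use the trusted user's choice if one was made; otherwise fall back to the
    best-matching candidate (as when the selection window expires).
    Returns the chosen (name, value) pair used to price the deposited item."""
    if user_choice is not None:
        for name, match, value in candidates:
            if name == user_choice:
                return name, value
    # timeout / no choice: take the candidate with the highest matching degree
    name, match, value = max(candidates, key=lambda c: c[1])
    return name, value

candidates = [("gold ring A", 0.92, 1200), ("gold ring B", 0.88, 900)]
print(choose_reference_item(candidates))                 # → ('gold ring A', 1200)
print(choose_reference_item(candidates, "gold ring B"))  # → ('gold ring B', 900)
```

The same selection logic applies when searching the local database instead of the network, only the source of candidates differs.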
  • the step of determining the value may further include a step of determining the value of the deposited item by searching a local database.
  • the local database is stored in storage device 180 .
  • the local database may contain one or more pre-stored reference item information including at least image data and a value of the item. Wherein, the value of the reference item can be set by the trusted user.
  • the steps of determining the value of the item through the local database search are similar to the steps of determining the value of the item through the network search, and will not be repeated here.
  • the step of determining the value may further include the step of determining the value of the item based on user input.
• the processor 260 may display prompt information through the display of the smart storage device 20 or the terminal device 160, prompting the user to manually set the value of the deposited item.
• the processor 260 determines the value of the deposited item based on the acquired value-setting request of the trusted user (the setting request carries the value information of the item).
  • the process 2900 may also include an authentication step.
  • the authentication step can determine the ownership information of the items in the item management area during the trusted user's visit based on the access information and/or the detection information of the item management area.
  • the authentication step can be performed by the processor 260 (such as the authentication sub-module 422).
  • Step 2904 Determine index information based on the item identification information, where the index information is at least determined based on the access information. In some embodiments, the index information is also determined based on the ownership information determined in the authentication step. Step 2904 can be executed by the processor 260 (such as the indexing submodule 322 of the management module 320, or the indexing submodule 423 of the management module 420).
  • step 2904 may further include an index generation step.
  • index information may be created based at least on access information, item identification information and preset rules. In some embodiments, the index information is also created based on the ownership information determined in the authentication step.
  • the index generating step may be performed by the processor 260 (such as the index generating unit of the index sub-module 322, 423). For the case where no index information has been created, and when data loss, data corruption or data reset of the storage device 180 makes the created index information unable to be updated and/or queried, the index generation step may create or rebuild index information. It should be noted that the processor 260 may create index information when there is no item access in the item management area.
• the preset rule may include an extraction rule for extracting the information of associated items from the information to be counted, and the information to be counted includes at least the access information and the item identification information; the index generation step further includes a step of extracting information from the information to be counted based on the extraction rule.
  • the extraction rule can be: extract the name, image, attribute, value, access status, access time, and user identification (such as user ID, user name) of the user who accesses the item from the information to be counted.
  • the information to be counted also includes ownership information determined in the authentication step.
  • the preset rules may also include statistical rules for counting the information of related items, and the index generating step further includes a step of counting the extracted information based on the statistical rules to generate index information.
  • the statistical rule can be: classify the extracted information according to different owners, and count the names, attributes, access status, access time, and total value of items currently stored with each owner.
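The owner-based statistical rule above can be sketched as a group-by over the extracted entries. The field names and example entries are illustrative, not the patent's schema:

```python
from collections import defaultdict

def build_owner_index(entries):
    """entries: extracted index entries, each a dict with hypothetical
    owner / name / access_status / value fields. Groups currently stored
    items by owner and totals their value per the statistical rule."""
    index = defaultdict(lambda: {"items": [], "total_value": 0})
    for e in entries:
        if e["access_status"] == "deposited":  # count only items still stored
            index[e["owner"]]["items"].append(e["name"])
            index[e["owner"]]["total_value"] += e["value"]
    return dict(index)

entries = [
    {"owner": "alice", "name": "ring", "access_status": "deposited", "value": 1200},
    {"owner": "alice", "name": "watch", "access_status": "taken_out", "value": 300},
    {"owner": "bob", "name": "necklace", "access_status": "deposited", "value": 800},
]
print(build_owner_index(entries)["alice"])  # → {'items': ['ring'], 'total_value': 1200}
```

Items already taken out are excluded from the per-owner totals, matching the "currently stored" qualifier in the rule.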
  • the preset rule may also include a query rule for querying index information based on a query request; the index generating step further includes a step of determining a query method for index information based on the query rule.
  • the query rule may be: querying index information by using one or more of access time, owner, property of the item, and name of the item as filter conditions.
  • the preset rule may also include a display rule for displaying corresponding index information based on the query request; the step of generating the index further includes a step of determining a display mode of the index information based on the display rule.
  • the display rule may be: displaying index information with one of item-based sorting, access-time-based sorting, and owner-based sorting as a display condition.
  • the display manner may include sorting based on items, displaying based on access time and displaying based on owner.
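The query and display rules above can be sketched together as filtering on equality conditions followed by sorting for display. The field names are illustrative assumptions:

```python
def query_index(entries, filters=None, sort_by="access_time"):
    """Filter index entries by equality on the given fields (e.g. owner,
    name) and sort the hits for display. Field names are hypothetical;
    the rules name access time, owner, attribute and name as filters."""
    filters = filters or {}
    hits = [e for e in entries
            if all(e.get(k) == v for k, v in filters.items())]
    return sorted(hits, key=lambda e: e[sort_by])

entries = [
    {"name": "ring", "owner": "alice", "access_time": 200},
    {"name": "watch", "owner": "bob", "access_time": 100},
    {"name": "coin", "owner": "alice", "access_time": 150},
]
result = query_index(entries, {"owner": "alice"}, sort_by="access_time")
print([e["name"] for e in result])  # → ['coin', 'ring']
```

Changing `sort_by` to `"name"` or `"owner"` would realize the other display conditions mentioned above.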
  • the indexing step may also include an index updating step.
  • the index updating step may update the index information based on at least the access information and the item identification information when the created index information is detected.
  • the index information is also updated based on the ownership information determined in the authentication step.
  • the index updating step can be performed by the processor 260 (such as the index updating unit of the index sub-module 322, 423).
  • the processor 260 may call the created index information stored in the storage device 180 and update the created index information.
  • one or more operations of adding, modifying and deleting are used to update the index information entries associated with the currently deposited items and/or withdrawn items in the index information.
  • the index information may include a plurality of index items, each index item being a collection of information associated with a specific item.
  • Each index item may include a plurality of index information items, such as item name, attribute, value, access status, deposit time, withdrawal time, user ID of the depositing user, user ID of the withdrawing user, and the like.
  • the processor 260 uses a modification operation when updating the index information entry.
  • for a withdrawn item, the index information entry associated with the item is updated with a modification operation for the access status, and with an addition operation for the withdrawal time and the user ID of the user who took out the item.
  • the index information entry including the value associated with the item may also be updated by a deletion operation.
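A minimal sketch of the add/modify/delete update operations described above, keyed by item identifier; the field names (`deposit_time`, `withdraw_time`, `status`, ...) are illustrative, not the embodiments' actual schema:

```python
def update_index(index, item_id, *, deposit=None, withdraw=None, delete=False):
    """Apply add/modify/delete operations to an index keyed by item id.

    `index` maps item_id -> dict of index information items.
    """
    if delete:
        index.pop(item_id, None)           # delete operation
        return index
    entry = index.setdefault(item_id, {})  # add operation if the entry is absent
    if deposit is not None:
        entry.update(deposit)              # e.g. name, value, deposit_time, depositor
        entry["status"] = "stored"
    if withdraw is not None:
        entry.update(withdraw)             # modify: add withdraw_time, withdrawing user
        entry["status"] = "withdrawn"
    return index
```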
  • the indexing step may also include a query display step.
  • the query display step may perform query display based on the index information.
  • the query displaying step may further determine index information corresponding to the query request based on the query request of the trusted user, and display the index information corresponding to the query request.
  • the query display step may be performed by the processor 260 (eg, the query display unit of the indexing sub-module 322, 423).
  • the query request carries filtering conditions for querying index information and display conditions for displaying index information.
  • for example, the query request may take the item attribute "gemstone" as the filtering condition and owner-based sorting as the display condition.
  • the processor 260 determines the index information corresponding to the filtering condition based on the trusted user's query request; the determined index information may include one or more index items associated with items whose attribute is gemstone, and the index information entries of each index item may include name, attribute, value, access time, and owner.
  • based on the trusted user's query request, the processor 260 determines to display the determined index information sorted by owner, that is, all index items are classified and arranged according to owner, and the name, attribute, value, and access time are displayed.
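The filter-then-sort-by-owner query flow above can be sketched as follows; this is an illustrative implementation under assumed field names (`attribute`, `owner`, etc.), not the embodiments' actual query engine:

```python
from itertools import groupby

def query_index(index_items, filter_cond, sort_key="owner"):
    """Filter index items by field conditions and group them by owner for display."""
    matched = [it for it in index_items
               if all(it.get(k) == v for k, v in filter_cond.items())]
    matched.sort(key=lambda it: it[sort_key])  # groupby requires sorted input
    return {
        group: [{k: it[k] for k in ("name", "attribute", "value", "access_time")}
                for it in grp]
        for group, grp in groupby(matched, key=lambda it: it[sort_key])
    }
```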
  • Fig. 35 is an exemplary flowchart of authentication steps according to some embodiments of this specification.
  • the process 3000 may include step 3001 to step 3004 .
  • Step 3001: based on the detection information, confirm whether items were deposited into or withdrawn from the item management area during the trusted user's visit. In some embodiments, whether there are deposited items and withdrawn items in the item management area can be confirmed based on the item identification information, where the item identification information is determined based on the detection information acquired by the detection module 220 in step 2903. Step 3001 can be executed by the processor 260 (such as the access determination unit of the identification sub-module 321, 421). In some embodiments, after determining the access information of the items in the item management area, the processor 260 can confirm whether there are deposited items and withdrawn items.
  • Step 3002: based on the access information and the detection information, determine the ownership information of the deposited items.
  • step 3002 may be performed by the processor 260 (such as the first authentication unit of the authentication sub-module 422).
  • the ownership information of the deposited item may be determined based on the access information and the item identification information determined based on the detection information in step 2903 .
  • step 3002 may further include: based on the deposit time of the item and the user's access time, determining the overlapping deposit and access time; determining the ownership relationship between the user corresponding to the overlapping deposit and access time and the corresponding deposited item.
  • Deposit and access overlapping time refers to the overlapping time point or time period between item deposit and user access.
  • the processor 260 establishes a timeline based on the deposit time of the item and the user's unlock time and lock time.
  • the storage time may be a storage time stamp determined based on a change time of detection information (such as pressure sensor data).
  • the processor 260 determines the deposit-access coincidence time during the unlocking period that coincides with the deposit-in time stamp, and determines that the user corresponding to the deposit-access coincidence time is the owner of the deposited item corresponding to the deposit-access coincidence time.
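The deposit-access coincidence logic above, finding the unlock session that contains the deposit timestamp, can be sketched as follows (an illustration, not the embodiments' implementation; session tuples are an assumed representation of the access information):

```python
def owner_of_deposit(deposit_ts, sessions):
    """Return the user whose unlock session contains the deposit timestamp.

    `sessions` is a list of (user_id, unlock_time, lock_time) tuples derived
    from the access information; `deposit_ts` comes from the change time of
    the detection data (e.g. a pressure sensor reading).
    """
    for user_id, unlock, lock in sessions:
        if unlock <= deposit_ts <= lock:  # deposit-access coincidence time
            return user_id
    return None  # no coinciding session: ownership cannot be determined
```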
  • step 3002 further includes determining a joint owner of the deposited item based on user input, wherein the user is a trusted user and the user is the owner of the deposited item.
  • the ownership of the same item can be jointly owned by multiple people.
  • the processor 260 receives a user's setting request, requesting to set one or more other trusted users as joint owners of the deposited item; the processor 260 determines whether the user who sent the setting request is the owner of the deposited item; when the judgment result is yes, based on the setting request, the one or more other trusted users are determined to be joint owners of the deposited item.
  • step 3003 historical ownership information of items in the item management area is obtained.
  • step 3003 may be performed by the processor 260 (such as the second authentication unit of the authentication submodule 422).
  • the processor 260 retrieves historical ownership information stored in the storage device 180. It can be understood that the ownership information of a specific historically deposited item in the historical ownership information may indicate the ownership information of a currently withdrawn item.
  • Step 3004: based on the detection information and the historical ownership information, determine the ownership information of the withdrawn item.
  • step 3004 may be executed by the processor 260 (such as the second authentication unit).
  • ownership information of the retrieved item may be determined based on item identification information and historical ownership information.
  • the processor 260 may determine the historically deposited items that match the information of the currently withdrawn items, and determine the ownership information of the currently withdrawn items based on the ownership information of the historically deposited items.
  • information matching may include name matching, category matching, image matching, etc. of items, or any combination thereof.
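The matching step above can be sketched as follows, using name and category matching for illustration (image matching could be substituted or combined; the history-record fields are assumed names):

```python
def ownership_of_withdrawal(withdrawn, history):
    """Find the historical deposit matching a withdrawn item; return its owner.

    `history` entries are dicts with 'name', 'category', 'owner'; matching
    here is exact name + category equality.
    """
    for past in history:
        if (past["name"] == withdrawn["name"]
                and past["category"] == withdrawn["category"]):
            return past["owner"]
    return None  # no matching historical deposit found
```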
  • step 3004 further includes judging whether the user who took out the item is the owner of the item, and sending an alarm message if the judgment result is no. Specifically, the trusted user who takes out an item may not be the owner of the item; alerting the owner or other related personnel when this happens can improve the security of item management.
  • processor 260 determines a trusted user for item retrieval based on the access information and item identification information. The method of determining the trusted user to take out the item is similar to step 3002, and the specific content can refer to the description of step 3002, which will not be repeated here.
  • the processor 260 determines that the trusted user taking out the item is not the owner of the item, and sends an alarm message.
  • Fig. 36 is an exemplary application scenario diagram of an index information management system according to some embodiments of this specification.
  • the user wakes up the index information management system in the standby state through the input module of the smart storage device and sends an access request, requesting the lock body of the smart storage device to perform an unlocking operation to allow the user to access the item management area; the access request carries user information indicating the user's identity for security verification.
  • the security module of the index information management system confirms whether the current user is a trusted user based on the user information carried in the access request.
  • if the verification fails, the security module sends an alarm message through the display of the smart storage device to prompt the user that the verification has failed and that security verification must be performed again, and waits for the user to re-enter the user information for the next round of user verification; if the verification succeeds, the security module controls the lock body of the smart storage device to perform an unlocking operation and generates access information.
  • after the item management area is closed, the management module of the index information management system calls the detection information of the items in the item management area collected by the detection module of the smart storage device, and calls the access information generated by the security module, for item identification, authentication, and index information updating.
  • the identification sub-module of the management module determines item identification information (including item information and item access information) based on the detection information, and judges based on the item identification information whether to deposit and/or withdraw items from the item management area during user access (during unlocking).
  • when the identification sub-module judges that no item was deposited or withdrawn, the index information management system returns to the standby state and waits for the user to wake it up.
  • when the identification sub-module judges that items were deposited, it jumps to the authentication sub-module of the management module to authenticate the deposited items: the authentication sub-module calls the item identification information determined by the identification sub-module and the access information generated by the security module to determine the ownership information of the deposited items. After the authentication sub-module completes authentication, it jumps to the index sub-module of the management module to update the index information.
  • when the identification sub-module judges that items were withdrawn, it jumps directly to the index sub-module of the management module to update the index information.
  • the index sub-module of the management module calls the item identification information determined by the identification sub-module, the ownership information of the deposited items determined by the authentication sub-module, and the access information generated by the security module to update the index information associated with the deposited items; the index sub-module invokes the item identification information determined by the identification sub-module and the access information generated by the security module to update the index information associated with the withdrawn items. After the index sub-module of the management module finishes updating the index information, the index information management system returns to the standby state and waits for the user to wake it up.
  • the smart device may include a smart storage device, and the user may remotely monitor and view the security situation outside the smart storage device.
  • the trigger information may include security information
  • the related information may include video information of a third preset area outside the management area
  • the risk level of the management area may be determined based on the security information and the video information of the third preset area.
  • the risk level can reflect the probability of danger in the management area, for example, the higher the risk level, the greater the probability of danger in the management area.
  • the smart device may send a reminder message.
  • the reminder information may include but not limited to push information, alarm information and the like.
  • push information may include but not limited to video information, comprehensive monitoring information, prompt information, and the like.
  • the alarm information may include, but not limited to, warning lights flashing, prompting sounds, vibrations, and the like.
  • the alert message may also include alerting law enforcement.
  • the smart device can send the reminder locally. In some embodiments, the smart device can issue a local alert. In some embodiments, the smart device can remotely send reminder information to the associated terminal device. In some embodiments, the smart device can remotely send an alarm message to the terminal device, causing the terminal device to vibrate or emit a prompt sound.
  • the smart device may remotely send the relevant information collected by the collection device to the terminal device.
  • the monitoring range of smart devices can be expanded, and risks can be predicted in advance, so that users can take timely countermeasures.
  • the third preset area may be a monitoring area within a preset range (for example, may be set by a user) away from the smart device.
  • the preset range may be a circular range centered on the smart device and within a radius of 10 meters.
  • the smart device may include the third preset area, but not include the first preset area and the second preset area.
  • the smart device may also include a first preset area, a second preset area, and a third preset area at the same time, and the third preset area may partially overlap, or not overlap, with the first preset area and the second preset area; for example, the third preset area may be a larger monitoring area around the smart device.
  • the control system can also obtain warning information in a fourth preset area outside the smart device management area; the risk level is also determined based on the warning information.
  • the warning information may include the external environment, for example, surrounding security, traffic conditions, accidents and so on.
  • the fourth preset area may not overlap with the first preset area, the second preset area and the third preset area.
  • the fourth preset area may be a monitoring area within a certain range farther from the smart device than the first preset area, the second preset area and the third preset area.
  • the fourth preset area may be a community area or a city area of a home where the smart device is located.
  • the warning information may include risk personnel
  • the smart device may obtain images or videos of risk personnel from the network, and the smart device may process the relevant information to determine whether risk personnel are present.
  • when risk personnel are detected, the risk level can be set to the highest level, and a reminder message can be sent.
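As a rough illustration (not the embodiments' actual scoring), the risk-level determination described above, combining the security information, the third-area video analysis, and the fourth-area warning information, might look like this; the inputs are simplified to booleans:

```python
def risk_level(security_alert, motion_in_video, risk_person_detected):
    """Combine security info, video analysis of the third preset area, and
    warning info from the fourth preset area into a coarse risk level
    (0 = normal, 3 = highest; higher means greater probability of danger)."""
    if risk_person_detected:   # warning information: risk personnel nearby
        return 3               # highest level; a reminder should be sent
    level = 0
    if security_alert:         # e.g. tampering or abnormal unlock attempt
        level += 1
    if motion_in_video:        # activity detected in the third preset area
        level += 1
    return level
```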
  • if multiple collection devices are separately arranged on the door, the structure on the door becomes complicated. Therefore, multiple collection devices can be integrated into the smart device (for example, a smart lock).
  • a smart device may include a multimedia device.
  • the smart device may include multiple collection devices, and the multiple collection devices may also be referred to as multimedia collection devices.
  • the multimedia device may include: a first body disposed on the smart device; and at least one target structure disposed on the first body, where the target structure includes at least a multimedia acquisition device, the multimedia acquisition device is used to collect relevant information (for example, multimedia data) of a target area, and the target area corresponds to the smart device.
  • a multimedia acquisition device capable of collecting multimedia data (for example, video information) can be integrated on the smart device, thereby simplifying the structure of the smart device.
  • the first body is disposed on the smart device, and at least one of the one or more collection devices is disposed on the first body, and the smart device and the collection device are integrated, thereby simplifying the structure of the smart device.
  • Fig. 37 is a schematic structural diagram of a multimedia device shown in some embodiments of this specification.
  • Fig. 38 is an application example diagram of a multimedia device shown in some embodiments of this specification;
  • Fig. 39 is an application example diagram of a multimedia device shown in other embodiments of this specification.
  • the multimedia device is set on the smart device through the first body 101 , and the smart device can be used in various security protection scenarios.
  • the multimedia device can be set on a door, a safe deposit box, a smart lock with a smart screen, and the like.
  • the multimedia device may include the following structures:
  • the first body 101 is set on the smart device X.
  • At least one target structure 102 is provided on the first body 101 .
  • the first body 101 can be understood as a structure supporting a frame or a supporting plate, for supporting the target structure 102 to be stably arranged on the smart device X.
  • the target structure 102 includes at least a multimedia collection device 121, and the multimedia collection device 121 is used to collect multimedia data in a target area 103, where the target area 103 corresponds to the smart device. As shown in Fig. 38, the target area 103 is the management area of the smart device X.
  • a support frame for supporting the video module is set on the smart lock x, as shown in Fig. 39, so that the video module can be directly integrated on the smart lock x without setting a separate video module on the door to collect video data, thereby reducing the structural complexity of the door.
  • in the multimedia device provided by some embodiments of this specification, by setting the multimedia device on the smart device and configuring one or more target structures (such as a multimedia acquisition device) on the multimedia device, a multimedia acquisition device capable of collecting multimedia data can be integrated on the smart device, thereby simplifying the structure of the component where the smart device is located.
  • Fig. 40 is a schematic structural diagram of a multimedia device shown in some other embodiments of this specification.
  • the first body 101 may include at least two opposite sides to form a first space, and at least one of the one or more collection devices is disposed in the first space.
  • the first body 101 may include a first side 111 and a second side 112, a first space 113 is formed between the first side 111 and the second side 112, and the first side 111 is connected to the smart device X.
  • a first through hole 114 is disposed on the second side 112 .
  • the second side 112 can be realized with an acrylic ring.
  • the first space 113 may be an inner hollow space structure formed by an interval between the first side 111 and the second side 112 .
  • the first space 113 may be a cylindrical space structure, or a square space structure, and the shape of the space structure is related to the shapes of the first side 111 and the second side 112 .
  • one or more target structures 102, such as a multimedia collection device 121, may be set in the first space 113.
  • the multimedia collection device 121 disposed in the first space 113 can collect multimedia data of the target area 103 through the first through hole 114 .
  • Fig. 41 is a schematic structural diagram of a multimedia device embedded in a smart device shown in some embodiments of this specification
  • Fig. 42 is a schematic structural diagram of a multimedia device embedded in a smart device shown in other embodiments of this specification.
  • the first body 101 can be partially embedded into the body of the smart device X through the first side 111 .
  • the second side 112 may be higher than the body surface of the smart device X by a preset distance.
  • the first body 101 can be completely embedded into the body of the smart device X through the first side 111 , at this time, the second side 112 is on the same plane as the surface of the body of the smart device X .
  • the space structure formed by the support frame is partially embedded in the middle frame of the smart lock through the inner side, and the outer side of the space structure protrudes relative to the surface of the middle frame of the smart lock to serve a prompting function; alternatively, the outer side of the space structure is flush with the surface of the middle frame of the smart lock, for an aesthetic effect.
  • Fig. 43 is a schematic structural diagram of a multimedia device shown in some other embodiments of this specification. Based on the structure shown in FIG. 40 , the multimedia data collected by the multimedia collection device 121 may be picture data, such as image, video and other multimedia data. In some embodiments, the multimedia acquisition device 121 in the first body 101 includes at least the following structure, as shown in FIG. 43 :
  • the lens holder 701 is arranged in the first space 113; the lens protection member 702 is arranged on the first through hole 114, and a second space 703 is formed between the lens protection member 702 and the lens holder 701; the lens 704 is arranged in In the second space 703 , the lens 704 captures image data of the target area through the lens protection member 702 disposed on the first through hole 114 .
  • the lens fixing part 701 and the lens protecting part 702 are fixedly connected by an adhesive, wherein the adhesive may be colloid for fixing.
  • the lens protection part 702 can be a transparent structure, such as lens protection glass, so that the lens 704 can clearly and accurately collect image data of the target area through the lens protection part 702 .
  • the lens installed in the support frame is fixedly connected to the support frame through a fixture, and the lens protection glass is configured on the outer acrylic ring, so that the lens in the support frame can collect picture data of the target area through the hole in the acrylic ring and the lens protection glass.
  • the first through hole 114 may be set to have the same size as the capture lens of the multimedia capture device 121 , and the lens is a component used to capture multimedia data. In some embodiments, the first through hole 114 may be configured to have a size larger than that of the lens of the multimedia collection device 121 .
  • the lens holder 701 has a first size parameter
  • the lens protector 702 has a second size parameter
  • the lens 704 has a third size parameter
  • both the first size parameter and the second size parameter are greater than the third size parameter
  • the sizes of the lens mount and the lens protection glass installed in the support frame are larger than the size of the lens itself, so the lens appears larger to users outside the door lock. This presents a visually enlarged structure for the module where the lens is located, giving the small lens element the effect of a large lens, so as to remind the user that a lens is capturing images here.
  • Fig. 44 is a schematic structural diagram of a multimedia device shown in some other embodiments of this specification.
  • the multimedia device in some embodiments may also include the following structure, as shown in Fig. 44: a human body sensor 104 set in the first space 113, with a second through hole 115 provided on the second side 112 so that the human body sensor 104 monitors through the second through hole 115 whether a human body appears in the target area.
  • Fig. 45 is a schematic structural diagram of a multimedia device shown in some other embodiments of this specification.
  • the human body sensor 104 may be a passive infrared (PIR) sensor.
  • a Fresnel lens 116 may be provided on the second through hole 115 to function as a protective component, as shown in FIG. 45 .
  • the human body sensor 104 can be used to monitor the human body passing through the target area corresponding to the multimedia device.
  • for example, on a smart lock, a PIR is also set, and the PIR is used to accurately sense whether someone has passed through the area corresponding to the door lock.
  • the second through hole 115 is located on the second side 112, spaced apart from the first through hole 114.
  • the first through hole 114 is arranged on the central region of the second side 112
  • the second through hole 115 is arranged on the edge region on the second side 112
  • the human body sensor 104 corresponding to the second through hole 115 is located in the first space 113 together with the multimedia collection device 121, and the two do not interfere with each other.
  • Fig. 46 is a schematic structural diagram of a multimedia device shown in some other embodiments of this specification.
  • the human body sensor 104 and the multimedia collection device 121 can be connected, as shown in Fig. 46, so that when the human body sensor 104 detects a human body in the target area, the multimedia collection device 121 is triggered to collect multimedia data of the target area, such as image data and/or audio data.
  • a connection interface can be set between the human body sensor 104 and the multimedia acquisition device 121, so that the human body sensor 104 generates a trigger instruction when a human body appears in the target area, and then sends the trigger instruction through the connection interface. Send it to the multimedia collection device 121 to trigger the multimedia collection device 121 to collect multimedia data in the target area.
  • when the PIR installed in the support frame senses someone passing by the area corresponding to the door lock, it sends an instruction to the lens in the support frame to trigger the lens to start video capture of the area corresponding to the door lock.
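The PIR-to-camera trigger path can be sketched as a simple event handler (an illustration only; the class and method names are hypothetical, and a real device would start a hardware capture pipeline rather than append to a list):

```python
class PirTriggeredCamera:
    """When the human body sensor fires, trigger multimedia capture."""

    def __init__(self):
        self.captures = []  # stand-in for the captured multimedia data

    def on_pir_event(self, human_detected, timestamp):
        """Handle a PIR reading; start capture only when a human is sensed."""
        if human_detected:
            self.captures.append(("video", timestamp))  # begin video capture
            return True
        return False  # no human: camera stays idle to save power
```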
  • Fig. 47 is a schematic structural diagram of a multimedia device shown in some other embodiments of this specification.
  • the target structure 102 may also include the following structure: a prompt triggering device 122 that outputs a notification message by means of sound and/or light when triggered, such as a doorbell on a door lock that alerts the user with an audible sound when pressed.
  • the prompt triggering device 122 is disposed on the second side 112, and a first light source 105 is disposed in the first space 113, at a position corresponding to the prompt triggering device 122.
  • the human body sensor 104 is connected to the first light source 105, so that when the human body sensor 104 detects a human body in the target area, it triggers the first light source 105 to emit light in a target flash mode, indicating the area on the second side 112 where the prompt triggering device 122 is located.
  • the first light source 105 is used to remind the user of the area where the prompt trigger device 122 is located, so that the user can operate the prompt trigger device 122 .
  • the first light source 105 may be a breathing light that can blink in different colors and/or in different blinking ways, so as to achieve the function of prompting.
  • in order to save power, when the remaining battery power of the smart device is lower than a threshold, the first light source 105 does not work; when the remaining battery power of the smart device is higher than the threshold, the first light source 105 works normally, that is, under the trigger of the human body sensor 104, the first light source 105 emits light in a target flash mode to indicate the area on the second side 112 where the prompt triggering device 122 is located.
  • not only are the PIR and the doorbell installed on the smart lock, but a breathing light corresponding to the position of the doorbell is also set on the door lock.
  • when the PIR senses that someone has passed the area corresponding to the door lock, it not only sends a command to trigger the camera to start video capture of the area corresponding to the door lock, but also controls the breathing light to flash to indicate the location of the doorbell, so as to remind people that they can press the doorbell.
  • when the battery power of the smart lock is insufficient, the breathing light does not flash when the doorbell is pressed; when the battery power of the smart lock is sufficient, the doorbell breathing light is triggered when the user comes within the wake-up distance of the PIR.
  • the breathing light can have a white breathing effect, for example cyclic flashing (on-off-on in repeated cycles) or gradual flashing (on for 1 second and off for 1 second, a 2 s on-off cycle), and so on.
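The breathing-light policy, flash only when the battery allows it and a person is within the PIR wake-up range, could be sketched as follows; the threshold value and the (on, off) pattern are illustrative, not specified by the embodiments:

```python
def breathing_light_pattern(battery_pct, human_near, low_battery_threshold=20):
    """Decide whether the doorbell breathing light should flash.

    Returns an (on_seconds, off_seconds) gradual-flash pattern, or None
    when the light stays off.
    """
    if battery_pct < low_battery_threshold:
        return None            # save power: light disabled on low battery
    if not human_near:
        return None            # only flash when PIR senses someone nearby
    return (1.0, 1.0)          # on 1 s, off 1 s: a 2 s breathing cycle
```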
  • the prompt triggering device 122 may also be arranged in other areas on the multimedia device or the smart device.
  • Fig. 48 is a schematic diagram of the setting positions of the smart lock doorbell shown in some embodiments of this specification.
  • the doorbell on the door lock can be arranged on the acrylic ring (adjacent to the protective glass of the lens), or on other areas of the middle frame of the door lock, such as above the fingerprint collection area, below the fingerprint collection area, or below the numeric keypad on the in-mold labeling (IML) panel.
  • the first light source 105 can be set to be in an off state, and the first light source 105 in the off state will not be triggered by the human body sensor 104 .
  • the breathing light on the smart lock used to indicate the location of the doorbell can be configured by the user through the door lock application installed on the mobile phone.
  • Fig. 49 is a schematic structural diagram of a multimedia device shown in other embodiments of this specification.
  • the target structure on the first body 101 may also include the following structure, as shown in FIG. 49:
  • the face recognition device 123 may include a projector 1231 , a flood light source 1232 and a camera 1233 , and of course other components may also be included.
  • the projector 1231 is mainly used to emit light to the target area that needs face recognition
  • the flood light source 1232 is mainly used to supplement the light of the projector 1231
  • the light emitted by the projector 1231 is reflected by the recognized face to the camera 1233; that is, the camera 1233 collects the face image in the target area under the light emitted by the projector 1231 and realizes face recognition.
  • the projector 1231, the flood light source 1232, and the camera 1233 may all be arranged in the first space 113 formed by the first body 101, laid out around the multimedia collection device 121 corresponding to the first through hole 114, so that the projector 1231, the flood light source 1232, and the camera 1233 surround the multimedia collection device 121 in a targeted layout.
  • for example, on a smart lock, in addition to a lens, a PIR and a face recognition device are set in the support frame; not only can the PIR accurately sense whether someone has passed the area corresponding to the door lock, but the face recognition device can also recognize faces in the area corresponding to the door lock. Therefore, in the smart lock, the lens, the PIR, the doorbell, the photosensitive element required by the camera, the infrared supplementary light, and other components are integrated into one module to achieve a modular design; by placing related functions together, the user experience is improved.
  • the distance between the flood light source 1232 and the projector 1231 is a first distance, and the first distance is smaller than a specific first threshold, so as to realize supplementary light of the flood light source 1232 to the projector 1231 .
  • the distance between the projector 1231 and the multimedia collection device 121 is greater than a specific target threshold, so that the light projected by the projector 1231 does not affect the collection of multimedia data by the multimedia collection device 121 .
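The two spacing constraints above (flood light source close to the projector, projector kept away from the multimedia collection device) can be sketched as a simple layout check. The coordinates and both threshold values below are illustrative assumptions; the specification leaves the actual values unspecified:

```python
import math

# Hypothetical 2D positions (in mm) of components on the first body;
# the values are illustrative only, not taken from the specification.
layout = {
    "projector":   (0.0, 0.0),   # projector 1231
    "flood_light": (4.0, 0.0),   # flood light source 1232
    "camera":      (30.0, 0.0),  # camera 1233
    "collector":   (18.0, 0.0),  # multimedia collection device 121
}

FIRST_THRESHOLD = 6.0    # assumed max flood-light-to-projector distance (mm)
TARGET_THRESHOLD = 12.0  # assumed min projector-to-collector distance (mm)

def distance(a: str, b: str) -> float:
    """Euclidean distance between two named components."""
    return math.dist(layout[a], layout[b])

def layout_is_valid() -> bool:
    # First distance: the flood light must sit close enough to the
    # projector to act as its supplementary light.
    close_enough = distance("flood_light", "projector") < FIRST_THRESHOLD
    # The projector must be far enough from the collection device so its
    # projected light does not interfere with multimedia capture.
    far_enough = distance("projector", "collector") > TARGET_THRESHOLD
    return close_enough and far_enough

print(layout_is_valid())  # True for the sample coordinates above
```

Any of the arrangements in Figs. 50 to 59 would pass this check, since they only constrain the two distances, not the absolute positions.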
  • FIGS. 50 to 59 are schematic diagrams of the layout of components included in the face recognition device shown in some embodiments of this specification.
  • the projector 1231 , the flood light source 1232 , the multimedia acquisition device 121 and the camera 1233 may be on the same straight line a, as shown in FIG. 50 or FIG. 51 .
  • the projector 1231, the multimedia acquisition device 121, and the camera 1233 are on the same straight line b.
  • the flood light source 1232 may be at the first distance from the straight line b; that is, the line between the flood light source 1232 and the projector 1231 is perpendicular to the straight line b, as shown in Fig. 52 and Fig. 53.
  • the line between the flood light source 1232 and the projector 1231 forms an acute or obtuse angle with the straight line b, and the flood light source 1232 may be at any position on an arc with the projector 1231 as the center and the first distance as the radius, as shown in Fig. 54.
  • the flood light source 1232, the multimedia acquisition device 121 and the camera 1233 are on the same straight line c; based on this, the projector 1231 may be at the first distance from the straight line c, that is, the line between the flood light source 1232 and the projector 1231 is perpendicular to the straight line c, as shown in Fig. 55 and Fig. 56.
  • the line between the projector 1231 and the flood light source 1232 forms an acute or obtuse angle with the straight line c, and the projector 1231 may be at any position on an arc with the flood light source 1232 as the center and the first distance as the radius, as shown in Fig. 57.
  • the camera 1233 may be on a first side of the multimedia acquisition device 121, with the projector 1231 and the flood light source 1232 on a second side; the two sides are symmetrical with respect to the multimedia collection device 121, and the relative position between the projector 1231 and the flood light source 1232 can be set arbitrarily as long as the first distance is satisfied, as shown in Figs. 50-57.
  • the projector 1231, the flood light source 1232 and the camera 1233 are on the same straight line d, and the multimedia acquisition device 121 is at a second distance from the straight line d, where the second distance is not 0; that is, the multimedia acquisition device 121 is not on the same straight line d as the projector 1231, the flood light source 1232 and the camera 1233, as shown in FIG. 58.
  • the projector 1231 , the flood light source 1232 and the camera 1233 are arranged in a triangle, and the multimedia collection device 121 is outside the triangle, as shown in FIG. 59 .
  • the projector 1231, the flood light source 1232 and the camera 1233 can be arranged arbitrarily around the multimedia acquisition device 121 in the first space 113, as long as the first distance between the projector 1231 and the flood light source 1232 is less than the first threshold and the distance between the projector 1231 and the multimedia acquisition device 121 is greater than the target threshold.
  • FIGS. 60 to 62 are schematic structural diagrams of multimedia devices shown in other embodiments of this specification.
  • the multimedia device in this embodiment may also include the following structure, as shown in FIG. 60:
  • the second light source 106 is connected to the first side 111 on the first body 101 .
  • the second light source 106 has at least a first flash mode, and the first flash mode is used to prompt that the multimedia acquisition device 121 is disposed on the first body 101 .
  • the second light source 106 can be fixed on the smart device through buckles and screws, and connected to the first side 111 of the first body 101 .
  • the second light source 106 can be arranged around the outer edge of the first side 111 of the first body 101 to realize the flashing mode in a ring structure, such as the light ring shown in FIG. 61, so that the second light source 106 prompts, through the first flash mode, the location of the multimedia collection device 121 on the smart device.
  • For example, on a smart lock, in addition to the lens, PIR and face recognition device set in the support frame, there may also be a light ring. The light ring is set around the edge of the support frame; the acrylic ring on the outer side of the support frame is pasted on the light ring, and the light ring is fixed on the IML panel of the door lock with buckles and screws. Through a corresponding light flashing mode, it can indicate the area of the door lock where the lens is located, and of course it may also indicate the areas where the PIR and the face recognition device are located.
  • the target structure 102 may also include a verification device 124, as shown in FIG. 62. The verification device 124 is connected to a collection component for collecting verification information, and is used to verify the verification information collected by the collection component to obtain a verification result.
  • the collection component may include any one or more of a fingerprint collector, a Near Field Communication (NFC) sensor, and a numeric keypad.
  • a verification processor may also be set up, connected with the fingerprint collector, the NFC sensor and the numeric keypad set on the door lock.
  • the collection area of the fingerprint collector is set under the support frame where the lens is located
  • the numeric keypad is set under the fingerprint collector
  • an NFC sensor can also be configured in the same area as the numeric keypad.
  • the second light source 106 also has at least a second flash pattern and a third flash pattern; the second flash pattern is used to prompt that the verification result indicates the verification information conforms to the verification rules, and the third flash pattern is used to prompt that the verification result indicates the verification information does not conform to the verification rules.
  • the verification rule may be: the verification information is consistent with preset legal information.
  • the fingerprint collected by the fingerprint collector is consistent with the preset fingerprint
  • the sensing information collected by the NFC sensor matches the preset NFC identification
  • the character string collected on the numeric keypad is consistent with the preset character string; and so on.
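As a minimal sketch of these rules, each collected credential is compared against the corresponding preset legal information. The matching scheme, stored identifiers and passcode below are hypothetical stand-ins for the device's real templates, not details from the specification:

```python
import hmac

# Preset legal information stored on the device (illustrative values only).
PRESET_FINGERPRINT_ID = "fp-template-01"
PRESET_NFC_ID = "nfc-tag-9f3a"
PRESET_PASSCODE = "482913"

def verify(kind: str, collected: str) -> bool:
    """Return True when the collected verification information is
    consistent with the preset legal information for its collector."""
    presets = {
        "fingerprint": PRESET_FINGERPRINT_ID,
        "nfc": PRESET_NFC_ID,
        "keypad": PRESET_PASSCODE,
    }
    preset = presets.get(kind)
    if preset is None:
        return False
    # Constant-time comparison, so a mismatch does not leak how many
    # leading characters matched via timing differences.
    return hmac.compare_digest(collected, preset)

print(verify("keypad", "482913"))     # True: passcode matches the preset
print(verify("nfc", "nfc-tag-0000"))  # False: NFC identification fails
```

A real fingerprint collector would of course compare feature templates rather than strings; the string comparison here only illustrates the "consistent with preset legal information" rule.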
  • the second light source 106 is connected with the verification device 124; when the verification device 124 finds that the verification information is consistent with the preset legal information, the second light source 106 prompts the user with the second flash mode, and when the verification information is inconsistent with the preset legal information, it prompts the user with the third flash pattern.
  • the second flash mode is different from the third flash mode, for example, the second flash mode is a steady green light mode, the third flash mode is a red light flashing mode, and so on.
  • if the user unlocks with a fingerprint and the fingerprint recognition passes, the light ring will turn white and rotate clockwise or flash several times; if the user unlocks with a fingerprint and the fingerprint recognition fails, the red light of the light ring breathes several times; if the user unlocks with a digital password and the password verification passes, the light ring will turn white and rotate clockwise or flash several times; if the user swipes an NFC card to unlock and the NFC identification passes, the light ring will likewise turn white and rotate clockwise or flash several times; if the digital password or NFC identification fails, the red light of the light ring flashes quickly and breathes several times; and so on. In this embodiment, a breathing reminder effect is thus realized by flickering the light ring in the door lock, the visual feedback is strengthened, and the user experience is improved.
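The feedback behavior described here can be modeled as a mapping from the verification outcome to a light-ring flash pattern. The pattern fields and repeat counts below are assumptions chosen for illustration, not values from the embodiment:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlashPattern:
    color: str    # light color shown on the ring
    style: str    # assumed styles: "rotate", "flash", "breathe"
    repeats: int  # how many times the pattern repeats

# Assumed second flash mode (pass): white rotation; third (fail): red breathing.
SECOND_FLASH = FlashPattern(color="white", style="rotate", repeats=3)
THIRD_FLASH = FlashPattern(color="red", style="breathe", repeats=3)

def pattern_for(verification_passed: bool) -> FlashPattern:
    """Select the light-ring pattern that reports the verification result."""
    return SECOND_FLASH if verification_passed else THIRD_FLASH

print(pattern_for(True).color)   # white
print(pattern_for(False).style)  # breathe
```

The mapping is deliberately independent of the collection component (fingerprint, NFC or keypad), matching the text's description that all successful verifications share one pattern and all failures another.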
  • FIG. 63 is a schematic structural diagram of a smart device shown in some embodiments of this specification. In combination with the multimedia structure shown in FIG. 37, some embodiments of this specification also provide a smart device. In addition to the multimedia device, it includes the following structure:
  • the second body 2401, such as a middle frame, can be arranged on components such as a base.
  • the lock body 2402 is arranged on the second body 2401 and is used to lock the smart device X to the object corresponding to the smart device X.
  • the multimedia device 2403 is set on the second body 2401;
  • the structure of the multimedia device 2403 can refer to the structure shown in FIG. 37-FIG. 59. Taking FIG. 37 as an example, the multimedia device 2403 can include the following structure:
  • the first body 101 is set on the smart device X.
  • At least one target structure 102 is provided on the first body 101 .
  • the first body 101 can be understood as a structure supporting a frame or a supporting plate, for supporting the target structure 102 to be stably arranged on the smart device X.
  • the target structure 102 includes at least a multimedia collection device 121, and the multimedia collection device 121 is used to collect multimedia data in a target area 103, and the target area 103 corresponds to a smart device. As shown in FIG. 38 , the target area 103 is an area toward which the smart device X is directed.
  • Fig. 64 is a schematic structural diagram of the smart lock shown in some embodiments of this specification when it is locked.
  • the lock body can lock the door where the door lock is located and the corresponding door frame of the door lock, as shown in FIG. 64 .
  • Fig. 65 is a schematic diagram of the video module of the smart lock shown in some embodiments of this specification.
  • the smart lock includes a base 6509, a middle frame 6508 and an IML panel 6507, and further includes a video module, which can include: a lens 6501, a lens decoration 6502, a light ring 6503, a PIR (infrared sensor) 6504, a lens protection glass 6505 (which may also be of other materials or structures), and an acrylic ring 6506.
  • Other functional parts can also be arranged in the internal structure formed by the acrylic ring 6506 and the middle frame.
  • the PIR 6504 can detect people through the Fresnel lens on the acrylic ring 6506.
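One plausible control flow for this arrangement, sketched below with hypothetical driver hooks (the specification does not define this logic), is for the PIR to wake the lens only when presence is sensed, keeping the camera idle otherwise:

```python
def pir_senses_presence() -> bool:
    """Stand-in for reading the PIR 6504 through the Fresnel lens on the
    acrylic ring; a real driver would sample the sensor's output pin."""
    return True  # pretend someone entered the area in front of the lock

def wake_camera_and_record(duration_s: float) -> str:
    """Hypothetical hook that powers the lens 6501 and records a short clip."""
    return f"recorded {duration_s:.0f}s clip"

def monitor_once() -> str:
    # Poll the PIR; only spin up the power-hungry camera on detection,
    # which is why pairing the PIR with the lens in one module is useful.
    if pir_senses_presence():
        return wake_camera_and_record(duration_s=5)
    return "idle"

print(monitor_once())  # recorded 5s clip
```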
  • the video module in the smart lock can be applied not only to the smart lock but also to the functional areas of other smart devices, such as storage devices, smart screens and smart doors.
  • the acrylic ring 6506 is actually a functional area on which other functional modules can be arranged, such as doorbells, fingerprint recognition devices, face recognition devices, and various sensors.
  • Fig. 66 is a schematic layout diagram of an IR projector, an IR flood light source and an IR camera in a face recognition device for a smart lock shown in some embodiments of this specification.
  • the face recognition device can realize face recognition.
  • the face recognition device can include multiple sensors, including infrared components such as an IR projector 6601, an IR flood light source 6602 and an IR camera 6603, among others.
  • these sensors can be set in the functional area of the acrylic ring and placed on one side of the lens at a preset horizontal or axial distance; they can also be placed symmetrically on both sides of the lens, as shown in Figs. 50 to 59 and Fig. 66.
  • the infrared sensors may each be arranged around the face recognition sensor array.
  • the video module assembly relationship is as follows:
  • the lens 6501 can be installed on the bracket inside the middle frame 6508; the PIR 6504 and the lens decoration 6502 are fixed to the internal structure of the light ring 6503; the lens protection glass 6505 is fixed and embedded in the lens decoration 6502 with glue; and the acrylic ring 6506 is adhesively pasted on the light ring 6503.
  • the light ring 6503 is fixed to the IML panel 6507 with clips and screws.
  • the doorbell backlight can be turned off in the mobile APP connected to the smart lock.
  • if fingerprint recognition passes, the light ring will turn white and rotate clockwise or flash several times; if fingerprint recognition fails, the red light of the light ring flashes rapidly and breathes several times; after the digital password or NFC identification is passed, the light ring will turn white and rotate clockwise or flash several times; if the digital password or NFC identification fails, the red light of the light ring flashes quickly and breathes several times; and so on.
  • the lens, PIR, doorbell, photosensitive element, infrared fill light, etc. are integrated into the video module in a modular design; placing related functions together improves the user experience.
  • the visual magnification structure of the lens module gives the small lens element the appearance of a large lens, reminding the user that this product has a video function.
  • the light ring flickers with a breathing effect, which strengthens the visual feedback and improves the user experience.
  • numbers describing quantities of components and attributes are used. It should be understood that such numbers used in the description of the embodiments are modified in some examples by the terms "about", "approximately" or "substantially". Unless otherwise stated, "about", "approximately" or "substantially" indicates that the stated figure allows for a variation of ±20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that can vary depending upon the desired characteristics of individual embodiments. In some embodiments, numerical parameters should take into account the specified significant digits and adopt the general digit-reservation method. Although the numerical ranges and parameters used in some embodiments of this specification to confirm the breadth of the range are approximations, in specific embodiments such numerical values are set as precisely as practicable.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Alarm Systems (AREA)

Abstract

An embodiment of the present invention relates to a control method and system. The method comprises: acquiring relevant information collected by one or more collection devices of a smart device on the basis of trigger information of the smart device; and processing the relevant information and/or the trigger information on the basis of a preset algorithm so as to control the smart device to perform a corresponding operation.
PCT/CN2022/104406 2021-07-08 2022-07-07 Système et procédé de commande WO2023280273A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202280048533.XA CN117730524A (zh) 2021-08-13 2022-07-07 一种控制方法和系统

Applications Claiming Priority (14)

Application Number Priority Date Filing Date Title
CN202121549850.9 2021-07-08
CN202121549850.9U CN215298315U (zh) 2021-07-08 2021-07-08 多媒体装置及智能安全设备
CN202110929241.4A CN115941882A (zh) 2021-08-13 2021-08-13 用于控制安防设备的方法和装置
CN202110929241.4 2021-08-13
CN202110928953.4A CN115706837A (zh) 2021-08-13 2021-08-13 安防信息处理方法、装置及相关设备
CN202110928953.4 2021-08-13
CN202111568028.1 2021-12-21
CN202111568028.1A CN113971782B (zh) 2021-12-21 2021-12-21 一种综合监控信息管理方法和系统
CN202111608219.6A CN113992859A (zh) 2021-12-27 2021-12-27 一种画质提升方法和装置
CN202111608219.6 2021-12-27
CN202210100036.1 2022-01-27
CN202210100036.1A CN114139021B (zh) 2022-01-27 2022-01-27 一种索引信息管理方法和系统
CN202210137781.3A CN114205565B (zh) 2022-02-15 2022-02-15 一种监控视频分发方法和系统
CN202210137781.3 2022-02-15

Publications (1)

Publication Number Publication Date
WO2023280273A1 true WO2023280273A1 (fr) 2023-01-12

Family

ID=84800334

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/104406 WO2023280273A1 (fr) 2021-07-08 2022-07-07 Système et procédé de commande

Country Status (1)

Country Link
WO (1) WO2023280273A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116667884A (zh) * 2023-06-08 2023-08-29 广东视安通智慧显控股份有限公司 一种非极化ip的室内综合两线控制系统
CN116680752A (zh) * 2023-05-23 2023-09-01 杭州水立科技有限公司 一种基于数据处理的水利工程安全监测方法及系统
CN118015737A (zh) * 2024-04-10 2024-05-10 山西丰鸿实业有限公司 基于物联网的智能门锁联合控制系统

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105095847A (zh) * 2014-05-16 2015-11-25 北京天诚盛业科技有限公司 用于移动终端的虹膜识别方法和装置
CN109636763A (zh) * 2017-10-09 2019-04-16 泰邦泰平科技(北京)有限公司 一种智能复眼监控系统
CN109918993A (zh) * 2019-01-09 2019-06-21 杭州中威电子股份有限公司 一种基于人脸区域曝光的控制方法
CN110428522A (zh) * 2019-07-24 2019-11-08 青岛联合创智科技有限公司 一种智慧新城的智能安防系统
CN110599657A (zh) * 2019-09-30 2019-12-20 浙江树人学院(浙江树人大学) 一种基于图像识别技术的门禁监控控制系统及方法
CN111090616A (zh) * 2019-12-30 2020-05-01 北京华胜天成科技股份有限公司 一种文件管理方法、对应装置、设备及存储介质
CN111882711A (zh) * 2020-07-23 2020-11-03 海尔优家智能科技(北京)有限公司 一种门锁控制方法及系统,存储介质、电子装置
CN112911146A (zh) * 2021-01-27 2021-06-04 杭州寰宇微视科技有限公司 基于人脸的智能调光方法
CN113449592A (zh) * 2021-05-18 2021-09-28 浙江大华技术股份有限公司 押运任务检测方法、系统、电子装置和存储介质
CN215298315U (zh) * 2021-07-08 2021-12-24 云丁网络技术(北京)有限公司 多媒体装置及智能安全设备
CN113971782A (zh) * 2021-12-21 2022-01-25 云丁网络技术(北京)有限公司 一种综合监控信息管理方法和系统
CN113992859A (zh) * 2021-12-27 2022-01-28 云丁网络技术(北京)有限公司 一种画质提升方法和装置
CN114139021A (zh) * 2022-01-27 2022-03-04 云丁网络技术(北京)有限公司 一种索引信息管理方法和系统
CN114205565A (zh) * 2022-02-15 2022-03-18 云丁网络技术(北京)有限公司 一种监控视频分发方法和系统

Similar Documents

Publication Publication Date Title
US11132881B2 (en) Electronic devices capable of communicating over multiple networks
WO2023280273A1 (fr) Système et procédé de commande
US10235822B2 (en) Automatic system access using facial recognition
US10083599B2 (en) Remote user interface and display for events for a monitored location
US10657749B2 (en) Automatic system access using facial recognition
AU2018312581B2 (en) Supervising property access with portable camera
US11854357B2 (en) Object tracking using disparate monitoring systems
US20210407266A1 (en) Remote security system and method
US10922547B1 (en) Leveraging audio/video recording and communication devices during an emergency situation
US20210248221A1 (en) Security control method and system
JP5363214B2 (ja) 警備システム及びセンサ端末
CN105264483A (zh) 用于管理场所安全的门户网站
CN117730524A (zh) 一种控制方法和系统
TWI712919B (zh) 智慧對講系統及其使用方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22837018

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202280048533.X

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE