CN114697386A - Information notification method, device, terminal and storage medium - Google Patents

Information notification method, device, terminal and storage medium Download PDF

Info

Publication number
CN114697386A
CN114697386A
Authority
CN
China
Prior art keywords
face
person
stay
marked
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210176100.4A
Other languages
Chinese (zh)
Inventor
焦帆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lumi United Technology Co Ltd
Original Assignee
Lumi United Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lumi United Technology Co Ltd filed Critical Lumi United Technology Co Ltd
Priority to CN202210176100.4A priority Critical patent/CN114697386A/en
Publication of CN114697386A publication Critical patent/CN114697386A/en
Priority to PCT/CN2023/087885 priority patent/WO2023160728A1/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiments of the present application provide an information notification method, an information notification apparatus, a terminal, and a storage medium, relating to the technical field of smart door locks. The method includes: receiving lingering-person information sent by a server, where the lingering-person information includes face data and stay data corresponding to a lingering person, the stay data indicates that the server has recognized the lingering person staying at a set position, the lingering person is either a marked person or an unmarked person, and a marked person is a person having a marked face; detecting a face display event triggered for displaying the face data; and displaying the stay data in an information notification bar. The embodiments of the present application solve the problem in the related art that the operations are overly cumbersome when a user wants to learn the specific situation of a person staying in front of a door.

Description

Information notification method, device, terminal and storage medium
Technical Field
The application relates to the technical field of intelligent door locks, in particular to an information notification method, an information notification device, a terminal and a storage medium.
Background
With the rapid development of the smart door lock industry, more and more consumers use smart door locks, and some smart door locks also provide a face recognition function. Through this function, a user can learn in time the specific situation of a person in front of the door, so as to fully safeguard home security.
In the related art, based on the face recognition function of the smart door lock, if a person stays in front of the door, the smart door lock records a video of or takes several photos of the person, so that the user can view the video or photos in a client associated with the smart door lock and thereby learn the specific situation of the person staying in front of the door.
In this process, the user must launch the client associated with the smart door lock before the video or photos of the lingering person can be viewed. The operation is cumbersome and easily degrades the user experience.
Therefore, how to simplify the operation so that the user can learn the specific situation of a person staying in front of the door in time remains to be solved.
Disclosure of Invention
Embodiments of the present application provide an information notification method, an information notification apparatus, a terminal, and a storage medium, which can solve the problem in the related art that the operations are cumbersome when a user wants to learn the specific situation of a person staying in front of a door. The technical solution is as follows:
According to an aspect of the embodiments of the present application, an information notification method includes: receiving lingering-person information sent by a server, where the lingering-person information includes face data and stay data corresponding to a lingering person, the stay data indicates that the server has recognized the lingering person staying at a set position, the lingering person is either a marked person or an unmarked person, and a marked person is a person having a marked face; and displaying the stay data in an information notification bar.
According to an aspect of the embodiments of the present application, an information notification apparatus includes: an information receiving module, configured to receive lingering-person information sent by a server, where the lingering-person information includes face data and stay data corresponding to a lingering person, the stay data indicates that the server has recognized the lingering person staying at a set position, the lingering person is either a marked person or an unmarked person, and a marked person is a person having a marked face; and an information notification module, configured to display the stay data in an information notification bar.
In one exemplary embodiment, the apparatus further comprises: the event detection module is used for detecting a face display event triggered for displaying the face data of the stay person; the data display module is used for responding to the face display event and displaying the face data in a face display page; and the data processing module is used for carrying out face marking processing on the stay personnel based on the face data.
In one exemplary embodiment, the apparatus further comprises: the first operation acquisition module is used for acquiring a first face display operation triggered by the stay data; and the first event generation module is used for responding to the first face display operation, acquiring the face data from the lingering person information and generating the face display event according to the face data.
In one exemplary embodiment, the apparatus further comprises: the data management page display module is used for displaying a face data management page, the face data management page is used for checking a face data item, and the face data item is used for indicating face data which can be displayed in the face display page; the second operation acquisition module is used for acquiring a second face display operation triggered by the face data item in the face data management page; and the second event generation module is used for responding to the second face display operation to determine the face data and generating the face display event according to the face data.
In an exemplary embodiment, the face display page further displays a face mark entry corresponding to the face data; the data processing module includes: a marking operation acquisition unit, configured to acquire a face marking operation triggered for the face mark entry; a display page jumping unit, configured to jump from the face display page to a face marking page in response to the face marking operation; a face image display unit, configured to display the face image of the lingering person on the face marking page; and a marking processing unit, configured to perform the face marking processing for the lingering person in the face marking page, so that the face image of the lingering person becomes a marked face and the lingering person becomes a marked person having the marked face.
In one exemplary embodiment, the marking processing unit includes: a marking operation acquisition subunit, configured to acquire a person marking operation triggered in the face marking page if the server identifies that the lingering person is not a marked person; and a relationship adding subunit, configured to add, in response to the person marking operation, an association between the lingering person and their face image in the face marking page.
In one exemplary embodiment, the marking processing unit includes: a modification operation acquisition subunit, configured to acquire a person modification operation triggered in the face marking page if the server's recognition of the lingering person is incorrect; and a mark association subunit, configured to associate the face image of the lingering person with the correct marked person in the face marking page in response to the person modification operation.
In one exemplary embodiment, the apparatus further comprises: the personnel management page module is used for displaying a personnel management page, and the personnel management page is used for checking marked personnel with marked faces; the checking operation acquisition module is used for acquiring the checking operation of the personnel triggered in the personnel management page; the management page jumping module is used for responding to the personnel checking operation and jumping from the personnel management page to a face checking page; and the mark display module is used for displaying a marked face related to the marked personnel and/or a first person identifier corresponding to the marked personnel in the face viewing page.
In one exemplary embodiment, the apparatus further comprises: the mark editing operation acquisition module is used for acquiring mark editing operation triggered in the face viewing page; and the mark adding module is used for responding to the mark editing operation, and adding a face image to the marked person in the face viewing page or deleting the related marked face.
In one exemplary embodiment, the apparatus further comprises: the identification editing operation acquisition module is used for acquiring identification editing operation triggered in the face viewing page; and the identification modification module is used for responding to the identification editing operation and modifying the first person identification corresponding to the marked person in the face checking page.
In one exemplary embodiment, the apparatus further comprises: an information sending module, configured to send marked-personnel information to the server so that the server identifies whether the lingering person is a marked person based on the marked-personnel information, where the marked-personnel information includes a first person identifier and a face identifier of at least one marked person, the first person identifier represents the marked person, and the face identifier represents the marked face of the marked person.
In one exemplary embodiment, the apparatus further comprises: the identification receiving module is used for receiving a second personnel identification sent by the server, and the second personnel identification is used for representing unmarked personnel which are identified by the server and belong to a set industry;
adding the second person identification to the tagged person information.
According to an aspect of an embodiment of the present application, a terminal includes: the system comprises at least one processor, at least one memory and at least one communication bus, wherein the memory is stored with computer programs, and the processor reads the computer programs in the memory through the communication bus; the computer program, when executed by a processor, implements the information notification method as described above.
According to an aspect of an embodiment of the present application, a storage medium has a computer program stored thereon, and the computer program, when executed by a processor, implements the information notification method as described above.
According to an aspect of an embodiment of the present application, a computer program product includes a computer program, the computer program is stored in a storage medium, a processor of a computer device reads the computer program from the storage medium, and the processor executes the computer program, so that the computer device realizes the information notification method as described above when executing the computer program.
The beneficial effects brought by the technical solutions provided by the present application are as follows:
In the technical solutions, the lingering-person information sent by the server is received; the lingering-person information includes stay data of a lingering person, and the stay data indicates that the server has recognized the lingering person staying at a set position. The stay data is then displayed in an information notification bar, so that the user can learn the specific situation of the lingering person in time without launching the client associated with the smart door lock.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments of the present application will be briefly described below.
FIG. 1 is a schematic illustration of an implementation environment according to the present application;
FIG. 2 is a flow diagram illustrating a method of information notification according to an example embodiment;
FIG. 3 is a flow diagram illustrating a face labeling process in an information notification method according to an exemplary embodiment;
FIG. 4 is a schematic diagram illustrating the display of face data of a lingering person according to an exemplary embodiment;
FIG. 5 is a schematic diagram illustrating the display of face data on a face display page in accordance with an illustrative embodiment;
FIG. 6 is a diagram illustrating a display of face data and corresponding face tag entries in a face display page in accordance with an illustrative embodiment;
FIG. 7 is a schematic diagram illustrating a face tagging page triggering a people tagging operation in accordance with an illustrative embodiment;
FIG. 8 is a schematic diagram illustrating a face tagging page triggering a person modification operation in accordance with an illustrative embodiment;
FIG. 9 is a schematic diagram illustrating a people management page jumping to a faceview page in accordance with an illustrative embodiment;
FIG. 10 is a schematic diagram illustrating a face view page in accordance with an illustrative embodiment;
FIG. 11 is a diagram illustrating an implementation of an information notification method in an application scenario;
fig. 12 is a block diagram illustrating a structure of an information notifying apparatus according to an exemplary embodiment;
FIG. 13 is a diagram illustrating a hardware configuration of a terminal in accordance with an exemplary embodiment;
fig. 14 is a block diagram illustrating a structure of an electronic device according to an example embodiment.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an implementation environment related to the information notification method. The implementation environment includes a user terminal 110, a router 120, a smart device 130, a gateway 150, and a server 170.
Specifically, the user terminal 110, also simply referred to as a terminal, may be used to deploy a client associated with the smart device 130, and may be an electronic device such as a smartphone, a tablet computer, or a notebook computer, which is not limited herein.
That the client is associated with the smart device 130 may also be understood as the user having registered an account for the smart device 130 in the client. The client may take the form of an application program or a web page, and accordingly the display pages provided by the client may take the form of program windows or web pages, which is likewise not limited herein.
The smart device 130 is deployed under the gateway 150 and accesses the gateway 150 through a communication module configured in the device (e.g., ZigBee, Wi-Fi, or Bluetooth), so as to interact with the gateway 150. The smart device 130 may be an electronic device configured with a communication module, such as a smart printer, smart fax machine, smart camera, smart air conditioner, smart door lock, smart lamp, human body sensor, door and window sensor, temperature and humidity sensor, water leak sensor, natural gas alarm, smoke alarm, wall switch, wall socket, wireless switch, wireless wall switch, cube controller, or curtain motor, which is likewise not specifically limited herein.
The user terminal 110 interacts with the gateway 150 and with the smart device 130 deployed under the gateway 150, so that the user can control the smart device 130 through the user terminal 110. In one application scenario, the user terminal 110 establishes a wired or wireless communication connection with the gateway 150 through the router 120, so that the user terminal 110 and the gateway 150 are deployed in the same local area network, and the user terminal 110 can then interact with the smart device 130 and the gateway 150 over a local area network path. In another application scenario, the user terminal 110 establishes a communication connection with the gateway 150 through the server 170, for example over 2G/3G/4G/5G or Wi-Fi (without limitation), so that the user terminal 110 and the gateway 150 are deployed in the same wide area network, and the user terminal 110 can then interact with the smart device 130 and the gateway 150 over a wide area network path.
The server 170 may be a single server, a server cluster formed by multiple servers, or a cloud computing center formed by multiple servers, so as to better provide background services for a large number of user terminals 110 and smart devices 130. For example, the server is an electronic device that provides background services for users, including but not limited to a face recognition service, a stay data push service, and the like.
In one application scenario, the smart device 130 is a smart door lock configured with a human body sensing apparatus. When the human body sensing apparatus detects that a person is staying in front of the door, it starts the camera configured on the smart door lock to record a video of or take several photos of the person, and sends the video or photos to the server 170.
Accordingly, the server 170 receives the video or photos sent by the smart door lock, performs face recognition on them, recognizes whether the person is a marked person, generates stay data for the lingering person staying in front of the door, and returns the stay data to the user terminal 110. The lingering person may be a marked person for whom the user has performed face marking processing, such as a person known to the user, or an unmarked person for whom the user has not performed face marking processing, such as a stranger.
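The detect-capture-recognize-push flow described in this scenario can be sketched as follows; this is a minimal illustration, and all class, function, and field names are placeholders of my own, not taken from the patent:

```python
class Camera:
    """Stand-in for the camera configured on the smart door lock."""
    def capture(self):
        return "video-bytes"  # placeholder for the recorded video or photos

class Recognizer:
    """Stand-in for the server-side face recognition service."""
    def recognize(self, media):
        return "f-001"  # placeholder face identifier

def on_person_detected(camera, recognizer, push):
    """Body sensor fires -> camera captures media -> server recognizes the
    face -> lingering-person information is pushed to the user terminal."""
    media = camera.capture()
    face_id = recognizer.recognize(media)
    stay_data = {"position": "front door", "face_id": face_id}
    payload = {"face_data": media, "stay_data": stay_data}
    push(payload)  # e.g. deliver to user terminal 110
    return payload

# Usage: collect pushed payloads in a list instead of a real push channel.
sent = []
payload = on_person_detected(Camera(), Recognizer(), sent.append)
assert sent[0] is payload
assert payload["stay_data"]["face_id"] == "f-001"
```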
The user terminal 110 receives the stay data of the lingering person staying in front of the door and displays it in the information notification bar.
In this way, through the stay data displayed in the information notification bar, the user can directly and promptly learn the specific situation of the person staying in front of the door, without launching the client associated with the smart door lock, which effectively solves the problem in the related art that the operations are overly cumbersome when the user wants to learn the specific situation of a person staying in front of the door.
Referring to Fig. 2, an embodiment of the present application provides an information notification method, which is applicable to the user terminal 110 in the implementation environment shown in Fig. 1; for example, the user terminal 110 may be an electronic device such as a smartphone, a tablet computer, or a notebook computer.
In the following method embodiments, for convenience of description, each step is described as being executed by the user terminal, but the method is not limited thereto.
As shown in fig. 2, the method may include the steps of:
and step 310, receiving the stay person information sent by the server.
In one embodiment, the stay person information includes stay data for the stay person. In another embodiment, the lingering person includes face data and lingering data of the lingering person.
The face data refers to a video or photos of the lingering person. In one application scenario, when the human body sensing apparatus configured on the smart door lock detects that a person is staying in front of the door, it starts the camera configured on the smart door lock to record a video of or take several photos of the person, and sends the video or photos to the server. Accordingly, the server receives the video or photos and stores them as the face data corresponding to the person, for use in face marking processing of the person. In other words, the face data may be a static image or a dynamic image describing the lingering person, which is not limited in this embodiment.
The stay data indicates that the server has recognized the lingering person staying at the set position. In one application scenario, after receiving the video or photos of the lingering person, the server performs face recognition on them and determines whether the lingering person is a marked person. Here, a marked person is a person having a marked face, which may be understood as a person whose face the user has marked in the client associated with the smart door lock. The server may thus recognize the lingering person as a marked person for whom the user has performed face marking processing, such as a person known to the user, or as an unmarked person for whom the user has not performed face marking processing, such as a stranger, and generates the stay data accordingly. That is, the stay data directly reflects whether the person staying at the set position (e.g., in front of the door) is a person known to the user.
In one embodiment, the user terminal generates marked-personnel information based on the marked persons for whom face marking processing has been performed, and sends the marked-personnel information to the server. The marked-personnel information includes a first person identifier and a face identifier of at least one marked person, where the first person identifier represents the marked person and the face identifier represents the marked face of that person. Accordingly, the server receives the marked-personnel information and identifies the lingering person based on it. In one embodiment, the server performs face recognition on the face data corresponding to the lingering person to obtain the lingering person's face identifier, and then searches the marked-personnel information for a marked person's face identifier matching it: if one exists, the lingering person is identified as that marked person; otherwise, the lingering person is identified as an unmarked person. An unmarked person may also be uniquely represented by an identifier, to distinguish them from the marked persons represented by first person identifiers.
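A minimal sketch of the lookup just described, assuming the marked-personnel information maps each first person identifier to the set of that person's marked face identifiers (the mapping shape and identifier strings are my assumptions, not specified by the patent):

```python
def identify_lingerer(face_id, marked_info):
    """Search the marked-personnel information for a marked face identifier
    matching the lingering person's face identifier. Returns the matched
    first person identifier, or None for an unmarked person."""
    for person_id, face_ids in marked_info.items():
        if face_id in face_ids:
            return person_id
    return None

# Usage: two marked persons, the first with two marked faces.
marked_info = {"person-1": {"face-a", "face-b"}, "person-2": {"face-c"}}
assert identify_lingerer("face-b", marked_info) == "person-1"
assert identify_lingerer("face-z", marked_info) is None  # unmarked person
```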
Further, in another embodiment, the marked-personnel information also includes at least one second person identifier, which represents an unmarked person belonging to a set industry and is fed back to the client by the server. Set industries include but are not limited to: courier, takeout, property management, real estate agency, security, and the like. Specifically, if the server identifies the lingering person as an unmarked person, it detects the lingering person's clothing: if the clothing is detected to belong to a set industry, the lingering person is identified as an unmarked person belonging to that set industry; otherwise, the lingering person is identified as a stranger. A stranger may be uniquely represented by a third person identifier, to distinguish them from the marked persons represented by first person identifiers and the set-industry unmarked persons represented by second person identifiers.
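The three-way identification above (marked person / set-industry unmarked person / stranger) might be sketched as follows; the clothing detector is a hypothetical stub, and the identifier formats are illustrative only:

```python
SET_INDUSTRIES = {"courier", "takeout", "property", "agency", "security"}

def classify(face_id, marked_info, detect_clothing):
    """Return (identifier_kind, identifier): a matched face yields the first
    person identifier; otherwise clothing detection may yield a second
    identifier for a set industry; otherwise a stranger (third identifier)."""
    for person_id, face_ids in marked_info.items():
        if face_id in face_ids:
            return ("first", person_id)
    industry = detect_clothing(face_id)  # hypothetical clothing detector
    if industry in SET_INDUSTRIES:
        return ("second", f"industry:{industry}")
    return ("third", f"stranger:{face_id}")

marked = {"person-1": {"face-a"}}
assert classify("face-a", marked, lambda _: None) == ("first", "person-1")
assert classify("face-x", marked, lambda _: "courier") == ("second", "industry:courier")
assert classify("face-y", marked, lambda _: None) == ("third", "stranger:face-y")
```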
Then, after the server obtains the face data and/or stay data of the lingering person and sends them to the user terminal, the user terminal receives the face data and/or stay data of the lingering person.
The face data of the lingering person may include the video or photos of the lingering person captured by the camera, and may also include a face image of the lingering person obtained by the server during face recognition, which is not limited herein.
Step 320: display the stay data in the information notification bar.
The information notification bar is independent of the client associated with the smart door lock.
In this way, through the stay data displayed in the information notification bar, the user can promptly learn the specific situation of the person staying in front of the door, for example, whether the lingering person is a stranger, without launching the client associated with the smart door lock, which effectively solves the problem in the related art that the operations are overly cumbersome when the user wants to learn the specific situation of a person staying in front of the door.
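As a concrete illustration of this step, a client-side handler might turn the pushed stay data into notification-bar text as sketched below; the field names and wording are my own assumptions, not taken from the patent:

```python
def notification_text(stay_data, person_name=None):
    """Format stay data for the system notification bar. Whether the
    lingering person was recognized as a marked person determines the
    wording shown to the user."""
    position = stay_data.get("position", "front door")
    if person_name:  # server matched a marked person
        return f"{person_name} is staying at the {position}."
    return f"A stranger is staying at the {position}."

# Usage: a recognized (marked) person versus an unrecognized one.
assert notification_text({"position": "front door"}, "Alice") == \
    "Alice is staying at the front door."
assert "stranger" in notification_text({"position": "front door"})
```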
Referring to Fig. 3, an embodiment of the present application provides the face marking process in the information notification method. The method may further include the following steps:
Step 330: detect a face display event triggered for displaying the face data of the lingering person.
A face display event is an event that triggers the display of a face. Specifically, the face display event may be generated based on the display of the stay data, or based on the viewing of the face data. Accordingly, the face display event may be a click on the stay data, or a click on the face data the user wishes to view.
In one embodiment, the face display event is generated as follows: acquire a first face display operation triggered on the stay data; in response to the first face display operation, acquire the face data from the lingering-person information and generate the face display event from the face data. The user terminal can then detect the face display event triggered on the stay data. Fig. 4(a) is a schematic diagram of the stay data displayed on an information push page in one embodiment; as shown in Fig. 4(a), the user enters the information push page 301 and clicks the displayed stay data 302, thereby triggering the display of the face data of the lingering person.
In another embodiment, the face display event is generated as follows: display a face data management page, which is used to view face data entries, each entry indicating face data that can be displayed in the face display page; acquire a second face display operation triggered on a face data entry in the face data management page; and, in response to the second face display operation, determine the face data and generate the face display event from it. The user terminal can then detect the click on the face data the user wishes to view. Fig. 4(b) is a schematic diagram of the face data management page in one embodiment; as shown in Fig. 4(b), the user enters the face data management page 303, in which three face data entries are displayed, each indicating face data viewable in the face display page. For example, the face data entry 304 indicates that face data 1, corresponding to first person identifier 1, can be viewed in the face display page; when the user clicks the face data entry 304, the entry indicates that the face data the user wishes to view is face data 1 corresponding to first person identifier 1, thereby triggering the display of the face data of the lingering person.
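The two event-generation paths above can be sketched as simple handlers; the dictionary shape of the event is an assumption made for illustration only:

```python
def on_stay_data_clicked(lingering_info):
    """First path: tapping the stay data on the information push page pulls
    the face data out of the lingering-person information and generates the
    face display event from it."""
    return {"type": "face_display", "face_data": lingering_info["face_data"]}

def on_face_entry_clicked(entry):
    """Second path: tapping a face data entry in the face data management
    page determines the face data indicated by that entry."""
    return {"type": "face_display", "face_data": entry["face_data"]}

# Usage: both paths yield the same kind of event for the same face data.
info = {"face_data": "photo-1", "stay_data": {}}
event = on_stay_data_clicked(info)
assert event == {"type": "face_display", "face_data": "photo-1"}
assert on_face_entry_clicked({"face_data": "photo-1"}) == event
```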
It should be noted that, depending on the input components configured on the user terminal (for example, a touch layer overlaid on the display screen, a mouse, or a keyboard), the specific form of the operation triggering the face display event may differ. For example, on a smartphone with touch input, the operation may be a gesture such as a tap or swipe, while on a notebook computer configured with a mouse it may be a mechanical operation such as a drag, click, or double-click, which is not specifically limited in this embodiment.
Step 350, responding to the face display event, and displaying face data in the face display page.
After the face display event is detected, display of the face data corresponding to the stay person is triggered. In one embodiment, a jump is made from the information push page to the face display page, and the face data is displayed in the face display page. In another embodiment, a jump is made from the face data management page to the face display page, and the face data is then displayed in the face display page.
Fig. 5 is a schematic diagram of a face display page in an embodiment. As shown in Fig. 5, when a user enters the face display page 305, a video 306 of the person staying in front of the door, recorded by a camera arranged in the smart door lock, can be viewed in the face display page 305.
And step 370, performing face marking processing on the stay person based on the face data.
After the face data is displayed in the face display page, face marking processing for the stay person can be triggered. In one embodiment, when the display duration of the face data in the face display page reaches a time threshold, an automatic jump is made from the face display page to a face marking page, so that face marking processing is performed for the stay person in the face marking page. In another embodiment, a face mark entry corresponding to the face data is also displayed in the face display page; by acquiring and responding to a face marking operation triggered on the face mark entry, a jump is made from the face display page to the face marking page, the face image of the stay person is displayed in the face marking page, and face marking processing is then performed for the stay person in the face marking page, so that the face image of the stay person becomes a marked face and the stay person becomes a marked person having a marked face. The face image of the stay person may be obtained by the server through face recognition, or may be obtained by the client through face recognition, which is not limited in this embodiment.
With continued reference to Fig. 5, a face mark entry corresponding to the face data 306 is also displayed in the face display page 305. When the user enters the face display page 305, the user can click the "face mark" icon 307, thereby triggering face marking processing for the stay person; the "face mark" icon 307 is the face mark entry corresponding to the face data 306. Of course, the face mark entry may always be displayed in the face display page, as shown in Fig. 5; in other embodiments the face mark entry may be usable only when the stay person is identified as a stranger, as shown in Fig. 6, where the "face mark" icon 308 in the face display page 305 cannot be clicked when the stay person Wang is identified as a non-stranger. It should be noted that an icon is substantially a triggerable control in the client, that is, a control that can be triggered to enable the client to interact with the user; display forms of triggerable controls in the client include, but are not limited to, buttons, switches, sliders, input boxes, drop-down boxes, and the like, which are not limited in this embodiment.
Face marking processing adds the face image of the stay person as a marked face, so as to establish an association relationship between the stay person and the face image. The marked face may be a face image of the stay person obtained by performing face recognition on multiple frames of static images contained in the video, may be a face image of the stay person obtained by performing face recognition on any one of the photos, or may be the face image of the stay person displayed in the face display page.
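The association relationship established by face marking can be sketched as a small data structure. The class and function names below are illustrative assumptions for explanation only, not part of the claimed method:

```python
from dataclasses import dataclass, field

@dataclass
class MarkedPerson:
    first_person_id: str  # e.g. the name entered by the user in the input box
    marked_faces: list = field(default_factory=list)  # associated face images

def mark_face(marked_persons: dict, person_name: str, face_image: bytes) -> MarkedPerson:
    """Add the stay person's face image as a marked face, establishing the
    association relationship between the person and the face."""
    person = marked_persons.setdefault(person_name, MarkedPerson(person_name))
    person.marked_faces.append(face_image)
    return person

marked = {}
mark_face(marked, "Wang", b"<face image from the video>")
mark_face(marked, "Wang", b"<face image from a photo>")
```

A marked person may accumulate several marked faces, which is consistent with the later embodiment in which the user adds a new face image for a marked person in the face viewing page.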
In one embodiment, the face marking processing may include the following steps: if the stay person identified by the server does not belong to the marked persons, acquiring a person marking operation triggered in the face marking page; and responding to the person marking operation by adding the association relationship between the stay person and the face image in the face marking page. Fig. 7 shows a schematic diagram of a person marking operation triggered in the face marking page in an embodiment. As shown in Fig. 7, a user enters the face marking page 309, in which a face image 310 of the stay person and a marked person list (for example, the list may contain marked faces 312 and first person identifications 313 of marked persons) are displayed. It should be understood that the marked person list reflects the association relationship between each marked person and that person's marked face; of course, in other embodiments, the marked person list may instead contain the first person identification of each marked person and a face identification representing the marked face, which is not limited in this embodiment. If the server identifies that the stay person Wang does not belong to the marked persons, no face image of Wang exists in the marked person list. At this time, the user can input the name of the stay person, namely "Wang", in the input box 311 to serve as the first person identification of the marked person, and after the user clicks the confirm icon 314, the association relationship is established between the face image of the stay person Wang and the person name "Wang". Thus, in subsequent marked person identification for Wang, the server can identify the stay person Wang as the marked person "Wang".
In one embodiment, the face marking processing may include the following steps: if the server identifies the stay person incorrectly, acquiring a person modification operation triggered in the face marking page; and, in response to the person modification operation, associating the face image of the stay person with the correct marked person in the face marking page. Fig. 8 shows a schematic diagram of a person modification operation triggered in the face marking page in an embodiment. As shown in Fig. 8, the user enters the face marking page 309, in which the face image 310 of the stay person Wang and the marked person list (for example, the list may contain marked faces 315 and first person identifications 316 of marked persons) are displayed. Suppose the server identifies the stay person Wang incorrectly, for example identifies Wang as the wrong marked person, or identifies Wang as an unmarked person belonging to the set industry although Wang is actually the marked person "Wang". At this time, the user may click the marked face 315 or the first person identification 316 of the correct marked person "Wang" in the marked person list and click the "confirm" icon 314, thereby associating the face image of the stay person Wang with the correct marked person "Wang". Thus, in subsequent marked person identification for Wang, the server will identify the stay person Wang as the correct marked person "Wang".
It is worth mentioning that the server's error in identifying the stay person may be caused by the camera's shooting angle, lighting conditions, or the like, so that the face image of the stay person obtained by the server through face recognition differs from the marked face of the marked person, and the server cannot correctly identify the stay person as that marked person.
Through the above process, face marking processing of the stay person is realized: people unknown to the user, such as strangers, can be marked, and people known to the user can also be marked. The user can thus accurately learn the specific situation of the person staying in front of the door, for example whether the stay person is Wang, from the stay data displayed in the information notification bar, without starting the client associated with the smart door lock, which effectively solves the problem in the related art that the operation is too cumbersome when the user wants to know the specific situation of a person staying in front of the door.
The embodiment of the present application provides a possible implementation manner, and the information notification method may further include the following steps:
And viewing, in a person management page, the marked persons having marked faces.
Specifically: displaying a person management page; acquiring a person viewing operation triggered in the person management page; jumping from the person management page to a face viewing page in response to the person viewing operation; and displaying, in the face viewing page, the marked faces associated with a marked person and/or the first person identification corresponding to that marked person. Fig. 9 shows a schematic diagram of jumping from the person management page to the face viewing page in an embodiment. As shown in Fig. 9, a user enters a person management page 401, in which a number of marked persons (e.g., marked person 1, marked person 2, and marked person 3) are displayed. If the user desires to view marked person 1, the user can click the marked face 402 or the first person identification 403 of marked person 1 in the person management page 401, and thereby enter a face viewing page 404 for viewing marked person 1.
In short, the person management page is used for viewing all marked persons having face marks; the face viewing page is used for viewing a designated one of the marked persons.
Further, a possible implementation manner is provided in the embodiment of the present application, and the information notification method may further include the following steps:
And performing person modification processing on the marked person in the face viewing page.
Specifically, the person modification processing may be adding a face image for the marked person, deleting an associated marked face, or modifying the first person identification corresponding to the marked person.
In one embodiment, the person modification processing may include the following steps: acquiring a mark editing operation triggered in the face viewing page; and, in response to the mark editing operation, adding a face image for the marked person in the face viewing page, or deleting an associated marked face. Fig. 10 is a schematic diagram of a face viewing page in an embodiment. As shown in Fig. 10, a user may enter the face viewing page 404 and click the "add" icon 406 to add a new face image 408 for marked person 1.
In one embodiment, the person modification processing may include the following steps: acquiring an identification editing operation triggered in the face viewing page; and, in response to the identification editing operation, modifying the first person identification corresponding to the marked person in the face viewing page. With continued reference to Fig. 10, the user enters the face viewing page 404 and may click the "edit" icon 405 to modify the first person identification 407 corresponding to marked person 1.
Through the cooperation of the above embodiments, management of face marks is realized: the user can not only add a face image or modify a marked person via the face data displayed in the face display page, but also add a face image or delete a marked face via the marked persons displayed in the person management page. The entry points for face marking processing are thus enriched, which effectively expands the face marking scenarios and helps improve the user's experience.
Fig. 11 is a schematic diagram of a specific implementation of the information notification method in an application scenario. In this application scenario, the smart device is a smart door lock, the server can push stay data to the user terminal through the information notification method, and the user terminal is a smart phone, so that the user can learn in time, by means of the notification bar function provided by the smart phone, that a person is staying in front of the door. The method specifically comprises the following steps:
s801: the intelligent door lock is provided with a human body sensing device for detecting whether a person stays in front of the door.
S802: when the human body sensing device detects that a person A stays in front of the door, the intelligent door lock starts a camera configured by the intelligent door lock so as to record a video or take a plurality of pictures for the person A.
S803: the intelligent door lock sends the video or the photo of the person A to the server.
S804: after receiving the video or photos of person A, the server performs face recognition on them to obtain the face identification of person A. Specifically: performing face detection on the video or photos of person A to obtain a face image containing the face of person A; extracting face image features from the face image of person A; and performing face prediction on person A according to the face image features to obtain the face identification of person A.
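The three-stage recognition in step S804 (face detection, feature extraction, face prediction) can be sketched as follows. The detector and feature extractor here are stand-ins (a digest of the cropped region substitutes for a learned embedding), so every function in this sketch is an assumption rather than the patent's actual model:

```python
import hashlib

def detect_face(photo: bytes) -> bytes:
    """Assumed detector: return the image region containing the face.
    Here it simply passes the input through as a placeholder crop."""
    return photo

def extract_features(face_image: bytes) -> str:
    """Stand-in for a learned face embedding: a stable digest of the region."""
    return hashlib.sha256(face_image).hexdigest()

def recognize(photo: bytes) -> str:
    """Face prediction: derive the face identification from the features."""
    return extract_features(detect_face(photo))

face_id = recognize(b"<photo of person A>")
```

The point of the sketch is only the pipeline shape: the same input always yields the same face identification, which is what makes the matching step in S805 possible.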
S805: performing marked person identification for person A based on the face identification of person A and the marked person information sent by the user terminal, to determine whether person A is a marked person. Specifically: based on the first person identifications and face identifications of marked persons in the marked person information, first searching for a face identification of a marked person that matches the face identification of person A; if such a match exists, person A is a marked person, and the first person identification of that marked person is obtained; otherwise, person A is an unmarked person, and a third person identification for the unmarked person is determined.
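The matching in S805 reduces to a lookup in the marked person information; the dictionary shape below is an assumed representation of that information, not the patent's actual storage format:

```python
def identify(face_id: str, marked_info: dict) -> tuple:
    """marked_info maps face identifications of marked persons to their first
    person identifications. Returns (is_marked, person_identification); an
    unmarked person falls through to the third person identification."""
    if face_id in marked_info:
        return True, marked_info[face_id]
    return False, "stranger"

is_marked, person_id = identify("face-1", {"face-1": "Wang"})
```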
If person A is identified as a marked person, the process proceeds to step S808.
If person A is identified as an unmarked person, the process proceeds to step S806, where it is further determined whether person A is an unmarked person of a set industry.
S806: performing clothing detection on person A to determine whether person A belongs to an unmarked person of a set industry. Specifically: performing clothing detection on the video or photos of person A to obtain a clothing image containing the clothing worn by person A; extracting clothing image features from the clothing image of person A; and performing clothing prediction on person A according to the clothing image features to obtain an industry identification of person A, thereby determining whether person A belongs to an unmarked person of the set industry.
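The clothing branch in S806 can be sketched the same way; the feature strings and the lookup table below are assumptions standing in for a trained clothing classifier:

```python
SET_INDUSTRIES = {"courier", "food delivery"}

# Assumed mapping from extracted clothing image features to industry identifications.
CLOTHING_TO_INDUSTRY = {
    "yellow uniform": "courier",
    "blue uniform": "food delivery",
}

def predict_industry(clothing_features: str):
    """Clothing prediction: return the industry identification, or None."""
    return CLOTHING_TO_INDUSTRY.get(clothing_features)

def is_unmarked_person_of_set_industry(clothing_features: str) -> bool:
    # None is never in SET_INDUSTRIES, so unrecognized clothing falls through.
    return predict_industry(clothing_features) in SET_INDUSTRIES
```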
S807: if person A belongs to an unmarked person of the set industry, generating a second person identification of person A, sending the second person identification to the user terminal, and storing the second person identification in the marked person information.
S808: if person A is identified as a marked person, the stay data is generated based on the first person identification of person A. For example, the first person identification of person A is "person A", and the stay data is "person A stays in front of the door".
If person A is identified as an unmarked person of the set industry, the stay data is generated based on the second person identification of person A. For example, the second person identification of person A is "courier", and the stay data is "the courier stays in front of the door".
If person A is identified as an unmarked person not belonging to the set industry, the stay data is generated based on the third person identification of person A. Specifically, the third person identification of person A is "stranger", and the stay data is "a stranger stays in front of the door".
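The three identification tiers of stay data generation can be summarized in one function; the wording of the pushed strings follows the examples above, and the parameter names are assumptions:

```python
def build_stay_data(is_marked: bool, first_person_id: str = None,
                    industry_id: str = None) -> str:
    """Pick the identification tier and format the stay data string."""
    if is_marked:
        # First person identification of a marked person.
        return f"{first_person_id} stays in front of the door"
    if industry_id is not None:
        # Second person identification of an unmarked person of a set industry.
        return f"{industry_id} stays in front of the door"
    # Third person identification of any other unmarked person.
    return "stranger stays in front of the door"
```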
S809: the face data and the stay data corresponding to person A are sent to the client. The face data may be the video or photos of person A, or may be the face image of person A.
S810: after the user terminal receives the stay data sent by the server, the user can view the stay data in the information notification bar of the user terminal.
For example, if the stay data is "courier/stranger stays in front of the door", meaning that the server has identified that person A does not belong to the marked persons, the user may perform face marking processing on person A, i.e., proceed to steps S811 to S815.
S811: the user looks up the video or the photo or the face image of the person A in the client, and if the person A is expected to be subjected to face marking processing, the user enters a face marking page to perform face marking processing on the person A. The human face marking processing comprises personnel marking and personnel modification.
S812: assuming that the stay data is "courier/stranger stay at door", indicating that person a is an unmarked person, person marking may be performed for person a.
The personnel token includes: acquiring personnel marking operation triggered in a face marking page; and responding to the personnel marking operation, and adding the association relationship between the personnel A and the face image thereof in the face marking page. Based on this, the person a is a marked person "person a", and the face image of the person a is a marked face of the marked person.
S814: assuming the stay data is "courier/stranger stay in front of door", but person a is essentially labeled person "person a", or assuming the stay data is "person B stays in front of door", indicating that the server identification is incorrect, person modification may be performed for person a.
The personnel modification comprises: acquiring personnel modification operation triggered in a face mark page; in response to the person modification operation, the face image of person a is associated with the correct tagged person in the face tagging page.
S813/S815: tagged personnel information is sent to the server, the tagged personnel information including a first person identification of person a, "person a". Then, after the server receives the marked person information, the subsequent person a can be identified as the correct marked person "person a", and accordingly the resulting stay data is "person a stays in front of the door".
In this application scenario, the user obtains, through the notification bar function provided by the smart phone, the stay data pushed by the server and used for indicating that a stay person stays at the set position. The user can thus learn about the person staying in front of the door without starting the client associated with the smart door lock, which is simple and convenient and effectively solves the problem in the related art that the operation is too cumbersome when the user wants to know the specific situation of a person staying in front of the door.
In addition, through face marking processing, the stay data pushed by the server for indicating that a stay person stays at the set position can reflect both people unknown to the user, such as strangers, and people known to the user, which greatly improves the accuracy of the pushed stay data and helps improve the user's experience.
The following are embodiments of the apparatus of the present application that can be used to perform the information notification method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to method embodiments of the information notification method referred to in the present application.
Referring to fig. 12, an embodiment of the present application provides an information notification apparatus 900, which includes but is not limited to: an information receiving module 910 and an information notifying module 930.
The information receiving module 910 is configured to receive stay person information sent by a server, where the stay person information includes stay data of a stay person, the stay data is used to indicate that the server recognizes that the stay person stays at a set position, the stay person is any one of a marked person and an unmarked person, and the marked person is a person with a marked face.
And an information notification module 930 configured to display the stay data in the information notification bar.
It should be noted that, when the information notification apparatus provided in the foregoing embodiment notifies information, the division of the above functional modules is taken only as an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the information notification apparatus may be divided into different functional modules to complete all or part of the functions described above.
In addition, the information notification apparatus provided in the above embodiments and the embodiments of the information notification method belong to the same concept, and the specific manner in which each module performs operations has been described in detail in the method embodiments, and is not described again here.
Referring to fig. 13, fig. 13 is a hardware configuration diagram illustrating a terminal according to an exemplary embodiment. The terminal is suitable for the user terminal 110 in the implementation environment shown in fig. 1.
It should be noted that the terminal is merely an example adapted to the application and should not be considered as limiting the scope of use of the application in any way. Nor should the terminal be interpreted as needing to rely on, or having to include, one or more components of the exemplary terminal 1100 shown in Fig. 13.
As shown in fig. 13, the terminal 1100 includes a memory 101, a memory controller 103, one or more (only one shown in fig. 13) processors 105, a peripheral interface 107, a radio frequency module 109, a positioning module 111, a camera module 113, an audio module 115, a touch screen 117, and a key module 119. These components communicate with each other via one or more communication buses/signal lines 121.
The memory 101 may be used to store computer programs and modules, such as the computer programs and modules corresponding to the information notification method and apparatus in the exemplary embodiment of the present application, and the processor 105 executes various functions and data processing by running the computer programs stored in the memory 101, so as to complete the information notification method.
The memory 101, as a carrier for resource storage, may be random access memory, e.g., high-speed random access memory, or non-volatile memory, such as one or more magnetic storage devices, flash memory, or other solid-state memory. The storage may be transient or permanent.
The peripheral interface 107 may include at least one wired or wireless network interface, at least one serial-to-parallel conversion interface, at least one input/output interface, at least one USB interface, and the like, for coupling various external input/output devices to the memory 101 and the processor 105, so as to realize communication with various external input/output devices.
The radio frequency module 109 is configured to transmit and receive electromagnetic waves and to perform mutual conversion between electromagnetic waves and electrical signals, so as to communicate with other devices through a communication network. Communication networks include cellular telephone networks and wireless local or metropolitan area networks, which may use various communication standards, protocols, and technologies.
The positioning module 111 is used to obtain the current geographic location of the terminal 1100. Examples of the positioning module 111 include, but are not limited to, the Global Positioning System (GPS), wireless local area network-based positioning technology, and mobile communication network-based positioning technology.
The camera module 113 is coupled to a camera and is used for taking photos or videos. The photos or videos taken can be stored in the memory 101 and can also be sent to a host computer through the radio frequency module 109.
The audio module 115 provides an audio interface to the user, and may include one or more microphone interfaces, one or more speaker interfaces, and one or more headphone interfaces, through which audio data is exchanged with other devices. Audio data may be stored in the memory 101 and may also be transmitted through the radio frequency module 109.
The touch screen 117 provides an input-output interface between the terminal 1100 and a user. Specifically, the user may perform an input operation, such as a gesture operation of clicking, touching, sliding, or the like, through the touch screen 117, so that the terminal 1100 responds to the input operation. The terminal 1100 displays and outputs the output content formed by any one or combination of text, pictures or videos to the user through the touch screen 117.
The key module 119 includes at least one key, providing an interface for the user to input to the terminal 1100; the user can cause the terminal 1100 to perform different functions by pressing different keys. For example, sound adjustment keys allow the user to adjust the volume of the sound played by the terminal 1100.
It is to be understood that the configuration shown in fig. 13 is merely exemplary, and terminal 1100 may include more or fewer components than shown in fig. 13, or different components than shown in fig. 13. The components shown in fig. 13 may be implemented in hardware, software, or a combination thereof.
Referring to Fig. 14, an electronic device 4000 is provided in an embodiment of the present application. The electronic device 4000 may include a smart phone, a tablet, a laptop, and the like.
In fig. 14, the electronic device 4000 includes at least one processor 4001, at least one communication bus 4002, and at least one memory 4003.
The processor 4001 is coupled to the memory 4003, for example via the communication bus 4002. Optionally, the electronic device 4000 may further include a transceiver 4004, which may be used for data interaction between this electronic device and other electronic devices, such as transmitting and/or receiving data. In addition, the number of transceivers 4004 is not limited to one in practical applications, and the structure of the electronic device 4000 does not constitute a limitation on the embodiments of the present application.
The processor 4001 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor 4001 may also be a combination that performs computing functions, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
Communication bus 4002 may include a path that carries information between the aforementioned components. The communication bus 4002 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The communication bus 4002 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 14, but this is not intended to represent only one bus or type of bus.
The memory 4003 may be a ROM (Read-Only Memory) or other type of static storage device capable of storing static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device capable of storing information and instructions, an EEPROM (Electrically Erasable Programmable Read-Only Memory), a CD-ROM (Compact Disc Read-Only Memory) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
The memory 4003 has a computer program stored thereon, and the processor 4001 reads the computer program stored in the memory 4003 through the communication bus 4002.
The computer program realizes the information notification method in the above embodiments when executed by the processor 4001.
In addition, in the embodiments of the present application, a storage medium is provided, and a computer program is stored on the storage medium, and when being executed by a processor, the computer program realizes the information notification method in the embodiments described above.
A computer program product is provided in an embodiment of the present application, the computer program product comprising a computer program stored in a storage medium. The processor of the computer device reads the computer program from the storage medium, and the processor executes the computer program, so that the computer device executes the information notification method in the above-described embodiments.
Compared with the related art, the user does not need to start the client associated with the smart door lock, but can learn in time the specific situation of the person staying in front of the door based on the stay data pushed by the server, which indicates that a stay person stays at the set position. The stay person may be a marked person on whom the user has performed face marking processing, such as a person the user knows, or an unmarked person on whom the user has not performed face marking processing, such as a stranger. This effectively solves the problem in the related art that the operation is too cumbersome when the user wants to know the specific situation of a person staying in front of the door.
It should be understood that, although the steps in the flowcharts of the figures are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not performed in a strictly limited order and may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be performed at different moments, and which are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
The foregoing is only a partial embodiment of the present application. It should be noted that those skilled in the art can make several improvements and refinements without departing from the principle of the present application, and these improvements and refinements should also be regarded as falling within the protection scope of the present application.

Claims (15)

1. An information notification method, characterized in that the method comprises:
receiving stay person information sent by a server, wherein the stay person information comprises stay data of a stay person, the stay data is used for indicating that the server recognizes that the stay person stays at a set position, the stay person is any one of a marked person and an unmarked person, and the marked person refers to a person having a marked face;
and displaying the stay data in an information notification bar.
2. The method of claim 1, wherein the method further comprises:
detecting a face display event triggered for displaying face data of the lingering person;
in response to the face display event, displaying the face data in a face display page;
and performing face marking processing on the lingering person based on the face data.
3. The method of claim 2, wherein prior to detecting the face display event triggered to display the face data of the lingering person, the method further comprises:
acquiring a first face display operation triggered on the stay data;
and in response to the first face display operation, acquiring the face data from the lingering person information, and generating the face display event according to the face data.
4. The method of claim 2, wherein prior to detecting the face display event triggered to display the face data of the lingering person, the method further comprises:
displaying a face data management page, wherein the face data management page is used for viewing a face data item, and the face data item is used for indicating face data that can be displayed in the face display page;
acquiring a second face display operation triggered by the face data item in the face data management page;
and in response to the second face display operation, determining the face data and generating the face display event according to the face data.
5. The method of claim 2, wherein the face display page further displays a face marking entry corresponding to the face data;
the performing face marking processing on the lingering person based on the face data comprises:
acquiring a face marking operation triggered at the face marking entry;
in response to the face marking operation, jumping from the face display page to a face marking page;
displaying a face image of the lingering person in the face marking page;
and performing the face marking processing on the lingering person in the face marking page, so that the face image of the lingering person becomes a marked face and the lingering person becomes a marked person having the marked face.
6. The method of claim 5, wherein the face marking processing comprises:
if the server identifies that the lingering person does not belong to any marked person, acquiring a person marking operation triggered in the face marking page;
and in response to the person marking operation, adding an association between the lingering person and the face image thereof in the face marking page.
7. The method of claim 5, wherein the face marking processing comprises:
if the server misidentifies the lingering person, acquiring a person modification operation triggered in the face marking page;
and in response to the person modification operation, associating the face image of the lingering person with the correct marked person in the face marking page.
8. The method of any of claims 1 to 7, further comprising:
displaying a person management page, wherein the person management page is used for viewing marked persons having marked faces;
acquiring a person viewing operation triggered in the person management page;
in response to the person viewing operation, jumping from the person management page to a face viewing page;
and displaying, in the face viewing page, the marked face associated with a marked person and/or a first person identifier corresponding to the marked person.
9. The method of claim 8, wherein the method further comprises:
acquiring a mark editing operation triggered in the face viewing page;
and in response to the mark editing operation, adding a face image for the marked person or deleting an associated marked face in the face viewing page.
10. The method of claim 8, wherein the method further comprises:
acquiring an identifier editing operation triggered in the face viewing page;
and in response to the identifier editing operation, modifying the first person identifier corresponding to the marked person in the face viewing page.
11. The method of any of claims 1 to 7, further comprising:
sending marked person information to the server, so that the server identifies whether the lingering person is a marked person based on the marked person information, wherein the marked person information comprises a first person identifier and a face identifier of at least one marked person, the first person identifier is used for representing the marked person, and the face identifier is used for representing the marked face of the marked person.
12. The method of claim 11, wherein the method further comprises:
receiving a second person identifier sent by the server, wherein the second person identifier is used for representing an unmarked person identified by the server as belonging to a set industry;
and adding the second person identifier to the marked person information.
13. An information notifying apparatus, characterized in that the apparatus comprises:
the information receiving module is used for receiving lingering person information sent by the server, wherein the lingering person information comprises face data and stay data corresponding to a lingering person, the stay data indicates that the server has recognized that the lingering person stays at a set position, the lingering person is any one of a marked person and an unmarked person, and the marked person refers to a person whose face has been marked;
and the information notification module is used for displaying the stay data in an information notification bar.
14. A terminal, comprising: at least one processor, at least one memory, and at least one communication bus, wherein,
the memory has a computer program stored thereon, and the processor reads the computer program in the memory through the communication bus;
the computer program, when executed by the processor, implements the information notification method of any one of claims 1 to 12.
15. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the information notification method according to any one of claims 1 to 12.
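The marked-person information exchanged with the server in claims 11 and 12 can be sketched as a simple data structure (the class, field, and method names are assumptions for illustration and are not part of the claims): the terminal keeps, per first person identifier, the face identifiers of that person's marked faces, and can absorb a second person identifier supplied by the server for a person from a set industry.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class MarkedPersonInfo:
    """Marked-person information sent to the server (illustrative schema)."""
    # first person identifier -> face identifiers of that person's marked faces
    faces_by_person: Dict[str, List[str]] = field(default_factory=dict)

    def add_marked_face(self, person_id: str, face_id: str) -> None:
        """Associate a marked face with a marked person (cf. claims 5 and 6)."""
        self.faces_by_person.setdefault(person_id, []).append(face_id)

    def add_second_person_id(self, second_person_id: str) -> None:
        """Record a server-identified person from a set industry (cf. claim 12);
        no marked face is associated yet."""
        self.faces_by_person.setdefault(second_person_id, [])

info = MarkedPersonInfo()
info.add_marked_face("person-001", "face-abc")
info.add_second_person_id("courier-007")
```

Under this sketch, the server would match a recognized face against the face identifiers and report the corresponding first person identifier, or flag the person as unmarked when no identifier matches.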
CN202210176100.4A 2022-02-24 2022-02-24 Information notification method, device, terminal and storage medium Pending CN114697386A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210176100.4A CN114697386A (en) 2022-02-24 2022-02-24 Information notification method, device, terminal and storage medium
PCT/CN2023/087885 WO2023160728A1 (en) 2022-02-24 2023-04-12 Information notification method and apparatus, and terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210176100.4A CN114697386A (en) 2022-02-24 2022-02-24 Information notification method, device, terminal and storage medium

Publications (1)

Publication Number Publication Date
CN114697386A true CN114697386A (en) 2022-07-01

Family

ID=82137766

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210176100.4A Pending CN114697386A (en) 2022-02-24 2022-02-24 Information notification method, device, terminal and storage medium

Country Status (2)

Country Link
CN (1) CN114697386A (en)
WO (1) WO2023160728A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023160728A1 (en) * 2022-02-24 2023-08-31 深圳绿米联创科技有限公司 Information notification method and apparatus, and terminal and storage medium

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107506755A (en) * 2017-09-26 2017-12-22 云丁网络技术(北京)有限公司 Monitoring video recognition methods and device
CN107563352A (en) * 2017-09-26 2018-01-09 云丁网络技术(北京)有限公司 A kind of method and device of the outer visitor's identity of recognitiion gate
CN108198287A (en) * 2017-12-21 2018-06-22 广东汇泰龙科技有限公司 A kind of doorbell based reminding method and system based on cloud lock
CN108933929A (en) * 2018-07-16 2018-12-04 北京奇虎科技有限公司 A kind of video monitoring method and security protection detection equipment
CN109361642A (en) * 2017-12-29 2019-02-19 广州Tcl智能家居科技有限公司 A kind of method and system that remote authorization is unlocked
CN109871785A (en) * 2019-01-29 2019-06-11 上海乐愚智能科技有限公司 Visiting personnel recognition methods, device, intelligent peephole, server and storage medium
CN110473372A (en) * 2019-08-16 2019-11-19 深圳海翼智新科技有限公司 Abnormal notification method, device and system in intelligent security guard
CN111126104A (en) * 2018-10-31 2020-05-08 云丁网络技术(北京)有限公司 Method and device for marking user identity attribute of data
CN111243143A (en) * 2020-03-25 2020-06-05 广东汇泰龙科技股份有限公司 Method and system for identifying and warning strangers outside door
CN112084812A (en) * 2019-06-12 2020-12-15 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium
CN112801586A (en) * 2021-01-29 2021-05-14 青岛海信智慧生活科技股份有限公司 Logistics tail distribution method and device and computing equipment
CN112991585A (en) * 2021-02-08 2021-06-18 亚萨合莱(广州)智能科技有限公司 Personnel entering and exiting management method and computer readable storage medium
CN113055652A (en) * 2021-03-22 2021-06-29 广东好太太智能家居有限公司 Abnormal condition management method based on video lock, system and device
CN113822899A (en) * 2021-06-04 2021-12-21 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium
CN114022896A (en) * 2021-09-28 2022-02-08 深圳绿米联创科技有限公司 Target detection method and device, electronic equipment and readable storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10599950B2 (en) * 2017-05-30 2020-03-24 Google Llc Systems and methods for person recognition data management
CN114697386A (en) * 2022-02-24 2022-07-01 深圳绿米联创科技有限公司 Information notification method, device, terminal and storage medium



Also Published As

Publication number Publication date
WO2023160728A1 (en) 2023-08-31


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination