WO2021189646A1 - Smart helmet and smart recognition method - Google Patents

Smart helmet and smart recognition method

Info

Publication number
WO2021189646A1
Authority
WO
WIPO (PCT)
Prior art keywords
target object
information
smart helmet
recognition
smart
Prior art date
Application number
PCT/CN2020/093728
Other languages
English (en)
French (fr)
Inventor
刘若鹏
栾琳
Original Assignee
深圳光启超材料技术有限公司
Priority date
Filing date
Publication date
Application filed by 深圳光启超材料技术有限公司
Publication of WO2021189646A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/20 Scenes; Scene-specific elements in augmented reality scenes
    • A HUMAN NECESSITIES
    • A42 HEADWEAR
    • A42B HATS; HEAD COVERINGS
    • A42B 3/00 Helmets; Helmet covers; Other protective head coverings
    • A42B 3/04 Parts, details or accessories of helmets
    • A42B 3/0406 Accessories for helmets
    • A HUMAN NECESSITIES
    • A42 HEADWEAR
    • A42B HATS; HEAD COVERINGS
    • A42B 3/00 Helmets; Helmet covers; Other protective head coverings
    • A42B 3/04 Parts, details or accessories of helmets
    • A42B 3/30 Mounting radio sets or communication systems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/28 Constructional details of speech recognition systems
    • G10L 15/30 Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223 Execution procedure of a spoken command

Definitions

  • the present invention relates to the field of wearable devices, in particular to a smart helmet and a smart identification method.
  • at present, security personnel are generally in short supply.
  • security personnel work at high intensity and carry heavy burdens.
  • the equipment they carry, such as helmets, has a single function and insufficient back-end system support; for example, it cannot interact effectively with the back-end system to quickly recognize target objects such as certificates and license plates in certain scenarios.
  • the embodiments of the present invention provide a smart helmet and a smart recognition method to at least solve the problem that existing wearable devices in the related art cannot support intelligent recognition of target objects such as certificates and license plates.
  • a smart recognition method is provided, which includes: collecting image information of a target object through a camera provided on a smart helmet; sending the image information of the target object to a recognition server for image recognition; sending the recognized target object to a smart helmet management system to obtain the identity information of the target object; and receiving the identity information of the target object from the smart helmet management system and displaying it through an output device of the smart helmet.
  • before collecting the image information of the target object through the camera provided on the smart helmet, the method further includes: receiving a voice instruction input for intelligent recognition through a microphone, transmitting the voice instruction to a voice server for recognition, and activating the recognition function for the target object according to the recognized voice instruction.
  • before collecting the image information of the target object through the camera provided on the smart helmet, the method further includes: receiving an intelligent recognition instruction input through a smart watch, and activating the recognition function for the target object according to the intelligent recognition instruction.
  • the method further includes: the smart helmet management system stores the identified target object in the cloud.
  • displaying the identity information of the target object through the output device of the smart helmet includes: displaying the identity information of the target object in the AR display interface of the smart helmet, and broadcasting the identity information of the target object by voice through the headset of the smart helmet.
  • the target object is at least one of the following: a certificate and a license plate.
  • a smart helmet with an intelligent recognition function is provided, including: a camera configured to collect image information of a target object; an intelligent recognition module configured to send the image information of the target object to a recognition server for image recognition of the target object, and to send the recognized target object to a smart helmet management system to obtain the identity information of the target object; and an output device configured to receive the identity information of the target object from the smart helmet management system and display it.
  • the smart helmet further includes: a microphone configured to receive a voice instruction input for intelligent recognition, transmit the voice instruction to a voice server for recognition, and activate the intelligent recognition module according to the recognized voice instruction.
  • the output device includes: an AR display component configured to display the alert and the alert location in an AR display interface based on the map information, and a headset configured to broadcast the received voice content.
  • a storage medium in which a computer program is stored, wherein the computer program is configured to execute the steps in any one of the foregoing method embodiments when running.
  • the recognition function for target objects such as certificates and license plates is integrated on the helmet, and data interaction with the back-end business system is implemented to make the recognition of target objects such as certificates and license plates intelligent, so that security personnel can perform their duties in a more advanced and efficient manner.
  • FIG. 1 is a schematic diagram of a product application of a smart recognition method according to an embodiment of the present invention;
  • FIG. 2 is a schematic diagram of the system framework of a smart helmet according to an embodiment of the present invention;
  • FIG. 3 is a schematic diagram of the system composition of a smart helmet according to an embodiment of the present invention;
  • FIG. 4 is a schematic diagram of an application environment of a smart recognition method according to an embodiment of the present invention;
  • FIG. 5 is a flowchart of a smart recognition method according to an embodiment of the present invention;
  • FIG. 6 is a flowchart of a smart recognition application according to an embodiment of the present invention;
  • FIG. 7 is a flowchart of an application of a license plate intelligent recognition module according to an embodiment of the present invention;
  • FIG. 8 is a flowchart of an application of a certificate intelligent recognition module according to an embodiment of the present invention;
  • FIG. 9 is a flowchart of interaction between a head-mounted device and a server according to an embodiment of the present invention;
  • FIG. 10 is a flowchart of interaction between a server and a head-mounted device according to an embodiment of the present invention;
  • FIG. 11 is a schematic diagram of the module structure of a smart helmet according to an embodiment of the present invention.
  • Fig. 1 is a schematic diagram of a product of a smart helmet with an information push function according to an embodiment of the present invention.
  • taking the helmet body as a reference, the smart helmet is divided into seven areas: the outer front side L1, the outer top side L2, the outer left and right sides L3, the outer rear side L4, the inner front side L5, the inner top side L6, and the inner rear side L7.
  • the outer front side L1 is the information collection area, used to house the camera;
  • the outer top side L2 is the communication area;
  • the outer rear side L4 is the energy supply area;
  • the inner top side L6 is the main board and heat dissipation area;
  • the outer left and right sides L3 are the functional areas;
  • the inner front side L5 is the AR module and goggles area;
  • the inner rear side L7 is the head lock device.
  • the technical solution of the embodiment of the present invention is applied to an information collection area on the outer front side, and images are collected by a camera in the information collection area for a target object such as a document or license plate to be recognized.
  • Functional layer: in addition to the most basic protective function, the smart helmet can also provide recognition functions such as ID card, driver's license, and license plate recognition.
  • Supporting layer: in addition to the smart helmet and smart watch in the hardware part, it also includes a back-end smart helmet management system and a third-party Internet application service platform, providing hardware and service support for realizing the intelligence of the wearable smart helmet.
  • Resource layer: the wearable smart helmet system is connected, through cloud services and back-end servers, to the systems of relevant platforms and to recognition and retrieval databases, truly realizing the intelligence and informatization of the wearable smart helmet.
  • the composition of the smart helmet system is shown in Figure 3.
  • the smart helmet system composed of smart helmets, smart watches, and smart helmet management systems improves the efficiency of the smart helmet’s collection and recognition of target objects in actual work scenarios.
  • a method for intelligently identifying a certificate or a license plate is provided.
  • the intelligent identification method can be applied to, but is not limited to, the application environment as shown in FIG. 4.
  • the smart helmet 102 runs an APP application, and the smart helmet 102 includes a camera and a microphone.
  • the smart helmet 102 can receive the user's voice instruction through the microphone, respond to the voice instruction, control the camera to photograph the certificate or license plate to be recognized, and upload the photo to the back-end server 104 via the network to recognize the target object such as a certificate or license plate, compare it, and obtain the corresponding identity information.
  • the back-end server 104 may be an image recognition server, a voice server for voice commands, an identity database of a public security system, a license plate information database, etc.
  • the recognition result is delivered to the smart helmet 102 for analysis and use by the wearer of the smart helmet 102.
  • the embodiment of the present invention does not limit this; the above is only an example.
  • FIG. 5 is a flowchart of a smart recognition method according to an embodiment of the present invention. As shown in FIG. 5, the method mainly includes the following steps:
  • Step S502, collect the image information of the target object through the camera set on the smart helmet, for example, photograph certificates such as ID cards and driver's licenses, or license plates;
  • Step S504, send the image information of the target object to a recognition server to perform image recognition on the target object;
  • Step S506, send the recognized target object to the smart helmet management system to obtain the identity information of the target object;
  • Step S508, receive the identity information of the target object from the smart helmet management system, and display the identity information of the recognized target object through the output device of the smart helmet.
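For illustration only, the following is a minimal Python sketch of the helmet-side flow described in steps S502 to S508. The endpoint URLs, response fields, and the capture/display/announce callables are hypothetical placeholders and are not part of the disclosed system.

```python
import requests

RECOGNITION_SERVER = "https://recognition.example.com/api/recognize"   # hypothetical endpoint
MANAGEMENT_SYSTEM = "https://helmet-mgmt.example.com/api/identity"     # hypothetical endpoint

def recognize_target(capture_frame, display, announce):
    # Step S502: capture an image of the target object (certificate or license plate)
    image_bytes = capture_frame()

    # Step S504: send the image to the recognition server for image recognition
    rec = requests.post(RECOGNITION_SERVER, files={"image": image_bytes}, timeout=10).json()
    if not rec.get("recognized"):
        announce("No target object recognized")
        return None

    # Step S506: send the recognized target (e.g. an ID number or plate number) to the
    # smart helmet management system to obtain the corresponding identity information
    identity = requests.post(MANAGEMENT_SYSTEM, json={"target": rec["target"]}, timeout=10).json()

    # Step S508: display the identity information in the AR interface and broadcast it by voice
    display(identity)
    announce(identity.get("summary", ""))
    return identity
```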
  • the recognition function for target objects such as certificates and license plates is integrated on the helmet, and data interaction with the back-end business system is realized to make the recognition of certificates and license plates intelligent, so that security personnel can perform their duties in a more advanced and efficient manner.
  • the smart helmet may receive an instruction sent by the user, and activate the smart recognition function according to the received instruction.
  • the instruction can be received in either of the following ways: 1) receive a voice instruction input for intelligent recognition through the microphone, transmit the voice instruction to the voice server for recognition, and activate the recognition function for the target certificate or license plate according to the recognized voice instruction; 2) receive an intelligent recognition instruction input through the smart watch, and activate the recognition function for the target certificate or license plate according to that instruction.
  • the smart helmet uploads the ID card, driver's license or license plate photo to the identification service provider for comparison, and obtains the corresponding ID card, driver's license number or license plate number.
  • the application service obtains corresponding ID card or license plate information from the identification information database according to the ID card, driver's license number, or license plate number.
  • the method may further include: the smart helmet management system stores the recognized target certificate or license plate in the cloud.
  • the identity information of the target certificate or license plate can be displayed in the AR display interface of the smart helmet and broadcast by voice through the headset of the smart helmet.
  • the voice instruction sent by the police officer is received through the smart helmet; the intelligent recognition function is activated in response to the voice instruction, the camera of the smart helmet is controlled to photograph the certificate or license plate to be recognized, and data interaction is performed with the back-end server to recognize the certificate or license plate.
  • Fig. 6 is a schematic diagram of the process of applying the intelligent recognition function of the smart helmet. The interaction logic relationship in this embodiment is shown in Table 1:
  • a license plate recognition module is integrated in the APP application of the smart helmet, which can capture the license plate through the camera, recognize the license plate, and return detailed vehicle information, for example including vehicle registration information, owner information, and whether the vehicle comes from a severely affected epidemic area.
  • the recognition system includes a helmet end, an image recognition server, an application server, and a one-car-one-file system. Taking epidemic prevention and control as an example application scenario, the interaction process between the devices is as follows:
  • Step S701, at the epidemic prevention and control checkpoint, the prevention and control personnel receive a license plate recognition instruction through the helmet they are wearing, and license plate recognition starts;
  • Step S702, the helmet end invokes the camera to photograph the license plate to be recognized, grabs the image frame, and uploads it to the image recognition server (for example, an Alibaba server);
  • Step S703, the image recognition server recognizes the license plate picture and returns the recognition result to the helmet end;
  • Step S704, the helmet end judges from the recognition result whether a license plate has been recognized; if yes, step S705 is executed; otherwise, a prompt indicates that there is no vehicle information;
  • Step S705, the helmet end sends the recognized license plate number to the application server;
  • Step S706, the application server accesses the interface of the one-car-one-file system with the license plate number as a parameter;
  • Step S707, the one-car-one-file system queries the database for detailed information on the license plate number, for example including the owner information, the vehicle registration information, the driving route of the vehicle, and whether the vehicle comes from a severely affected epidemic area, and returns it to the application server;
  • Step S708, the application server judges whether there is vehicle information; if not, step S709 is executed; if yes, step S710 is executed;
  • Step S709, prompt the helmet end that there is no vehicle information;
  • Step S710, feed back the relevant vehicle information to the helmet end, for example, the owner information, the vehicle registration information, the driving route of the vehicle, and whether the vehicle comes from an epidemic area;
  • Step S711, the helmet end receives the relevant vehicle information returned from the application server and displays it to the prevention and control personnel, for example, the owner's registration information and whether the vehicle comes from an epidemic area are displayed through the AR display component.
  • the relevant information can also be broadcast through the headset. Therefore, the prevention and control personnel can quickly screen the vehicle based on the vehicle-related information fed back from the back end.
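As a rough sketch of the application-server side of steps S706 to S710, the fragment below shows how a recognized plate number might be looked up in the one-car-one-file system and the result prepared for the helmet end. The interface URL and the field names are assumptions made for illustration, not the actual system interface.

```python
import requests

ONE_CAR_ONE_FILE_API = "https://vehicle-archive.example.com/api/vehicle"   # hypothetical interface

def query_vehicle(plate_number: str) -> dict:
    """Steps S706-S710: query the one-car-one-file system with the plate number
    and build the response that is fed back to the helmet end."""
    resp = requests.get(ONE_CAR_ONE_FILE_API, params={"plate": plate_number}, timeout=10)
    record = resp.json() if resp.ok else None

    if not record:  # Step S709: no vehicle information found
        return {"found": False, "message": "No vehicle information"}

    # Step S710: feed back owner, registration, driving route, and epidemic-area flag
    return {
        "found": True,
        "owner": record.get("owner"),
        "registration": record.get("registration"),
        "route": record.get("route"),
        "from_epidemic_area": record.get("from_epidemic_area", False),
    }
```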
  • a certificate recognition module is integrated in the APP application of the smart helmet, which can capture image information of a certificate, for example an ID card or a driver's license, through the camera, recognize the certificate, and return the person's detailed identity information as well as the person's origin information, travel information, and so on, for example whether the person comes from an epidemic area and the means of transportation the person has taken.
  • the recognition system includes a helmet end, an image recognition server, an application server, and a one-person-one-file system. Taking epidemic prevention and control as an example application scenario, the interaction process between the devices is as follows:
  • Step S801, at the epidemic prevention and control checkpoint, the prevention and control personnel receive a certificate recognition instruction (for example, for an ID card or a driver's license) through the helmet they are wearing, and certificate recognition starts;
  • Step S802, the helmet end invokes the camera to photograph the certificate of the person to be identified, grabs the image frame, and uploads it to the image recognition server (for example, an Alibaba server);
  • Step S803, the image recognition server recognizes the certificate picture and returns the recognition result to the helmet end;
  • Step S804, the helmet end judges from the recognition result whether a certificate has been recognized; if yes, step S805 is executed; otherwise, a prompt indicates that there is no personnel information for the certificate;
  • Step S805, the helmet end sends the recognized personnel information to the application server;
  • Step S806, the application server accesses the interface of the one-person-one-file system with the certificate number as a parameter;
  • Step S807, the one-person-one-file system queries the database for detailed information on the certificate number and obtains the personnel information for the certificate, for example the person's identity information as well as the person's origin information and travel information (for example, whether the person comes from an epidemic area and the means of transportation taken), and returns it to the application server;
  • Step S808, the application server judges whether there is personnel information for the certificate; if not, step S809 is executed; if yes, step S810 is executed;
  • Step S809, prompt the helmet end that there is no such personnel information;
  • Step S810, feed back the personnel information corresponding to the certificate to the helmet end, for example, the person's identity information as well as the person's origin information and travel information, such as whether the person comes from an epidemic area and the means of transportation taken;
  • Step S811, the helmet end receives the personnel information returned from the application server and displays it to the epidemic prevention and control personnel, for example, the person's identity information as well as the person's origin information and travel information, such as whether the person comes from an epidemic area and the means of transportation taken. Therefore, the epidemic prevention and control personnel can quickly and efficiently screen the relevant person based on the personnel information fed back from the back end.
  • FIG. 9 is a flowchart of an interaction method provided according to an embodiment of the present invention. As shown in FIG. 9, the method in this embodiment includes the following steps:
  • S902, when the target object (a person) passes through the epidemic checkpoint, the prevention and control personnel can acquire the temperature information of the target object through the head-mounted device;
  • S904, when the temperature information exceeds a preset threshold, the head-mounted device is invoked to acquire feature information of the target object, and the feature information is sent to the server, where the feature information is used to instruct the server to acquire the action trajectory information of the target object within a preset historical period according to the feature information and to generate confirmation information according to the action trajectory information and preset epidemic prevention trajectory information;
  • S906, the head-mounted device receives the confirmation information returned by the server.
  • the head-mounted device involved in this embodiment will be described below. It should be further explained that the head-mounted devices involved in the interactive method of the epidemic prevention system in this embodiment include, but are not limited to, smart helmets, smart glasses and other head-mounted devices that can be worn by users.
  • the head-mounted devices listed below are only one implementation carrier of the interaction method of the epidemic prevention system in this embodiment, and are not necessary equipment for implementing this embodiment.
  • in the above step S902, the head-mounted device acquires the temperature information of the target object by means of temperature measurement equipment provided in the functional area of the head-mounted device, for example a thermal imaging temperature measurement module or an infrared temperature measurement module, which measures the temperature of the target object and thereby completes the acquisition of the temperature information of the target object.
  • in the above step S904, the head-mounted device sends the feature information to the server by means of a communication module provided in the communication area of the head-mounted device, such as a 4G/5G module or a WiFi module, which realizes the information interaction between the head-mounted device and the server and thereby completes the sending of the feature information to the server.
  • the determination of whether the temperature information exceeds the preset threshold may, in one alternative embodiment, be made by a processor mounted on the body of the head-mounted device; in another alternative embodiment, the head-mounted device may send the temperature information to the server or the cloud for comparison with the threshold, and the head-mounted device is instructed to perform the corresponding processing according to the comparison result, which is not limited in the present invention.
  • the aforementioned preset threshold may be 37.3°C.
  • the feature information of the target object is information indicating the identity of the target object, for example, a facial image of the target object or a certificate ID of the target object.
  • the head-mounted device may acquire the feature information of the target object by photographing the target object with the camera in the collection area of the head-mounted device.
  • after facial information is acquired, it can be sent directly to the server as the feature information, or the head-mounted device can recognize the target object's certificate ID, such as the ID card number, from the facial image and then send the certificate ID to the server as the feature information, which is not limited in the present invention.
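A minimal sketch of the head-mounted device logic in steps S902 to S906 follows, assuming a temperature-reading callable, a camera callable, and a server endpoint; all of these names are illustrative placeholders rather than the disclosed implementation.

```python
import requests

SERVER_URL = "https://epidemic-server.example.com/api/check"   # hypothetical endpoint
TEMPERATURE_THRESHOLD = 37.3   # degrees Celsius, the example threshold given in the text

def screen_target(read_temperature, capture_face):
    # S902: acquire the temperature of the target object (e.g. via a thermal imaging module)
    temperature = read_temperature()
    if temperature <= TEMPERATURE_THRESHOLD:
        return {"risk": "normal", "temperature": temperature}

    # S904: the temperature exceeds the threshold, so capture the feature information
    # (here, a facial image) and send it to the server together with the reading
    face_image = capture_face()
    resp = requests.post(SERVER_URL, files={"face": face_image},
                         data={"temperature": temperature}, timeout=10)

    # S906: receive the confirmation information returned by the server
    return resp.json()
```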
  • after the server receives the feature information, it can acquire the action trajectory information, within a preset historical period, of the target object indicated by the feature information, and generate confirmation information based on the action trajectory information and the preset epidemic prevention trajectory information.
  • the aforementioned action trajectory information indicates the trajectory of actions produced by the target object within a preset historical period, such as 14 days; in an optional embodiment, the action trajectory information includes at least one of the following: the transportation information of the target object, the city information of the target object, and the residence information of the target object.
  • the above transportation information refers to the means of transportation taken by the target object, for example, airplanes and trains.
  • the server can obtain the times and schedules of the means of transportation taken by the target object within the preset historical period by connecting to data of the relevant transportation departments.
  • the above city information of the target object refers to the cities the target object has passed through; on the basis of the transportation information, the server can combine it with the accommodation registration information of the target object to obtain the cities the target object has passed through within the preset historical period.
  • the residential address information of the target object mentioned above is the residential address or permanent residence of the target object.
  • any information that can record its action trajectory generated by the target object in a preset historical period is equivalent to the aforementioned action trajectory information.
  • the above epidemic prevention trajectory information indicates the trajectory of actions produced by confirmed cases of the target epidemic in the current epidemic prevention scenario; in an optional embodiment, the epidemic prevention trajectory information includes at least one of the following: the transportation information of the confirmed case, the city information of the confirmed case, and the residence information of the confirmed case.
  • the above transportation information refers to the means of transportation taken by the confirmed case, such as airplanes and trains.
  • the server can obtain the times and schedules of the means of transportation taken by the confirmed case within a certain period (for example, the 14 days before the date of diagnosis) by connecting to data of the relevant transportation departments.
  • the above city information of the confirmed case refers to the cities the confirmed case has passed through; on the basis of the transportation information, the server can combine it with the accommodation registration information of the confirmed case to obtain the cities the confirmed case has passed through within a certain period.
  • the above residence information of the confirmed case refers to the residential address or permanent residence of the confirmed case.
  • any information produced by the confirmed case within a certain period that can record its action trajectory is equivalent to the above epidemic prevention trajectory information.
  • the above confirmation information includes first confirmation information and second confirmation information:
  • the first confirmation information is used to indicate that there is an intersection between the action trajectory information and the epidemic prevention trajectory information;
  • the second confirmation information is used to indicate that there is no intersection between the action trajectory information and the epidemic prevention trajectory information.
  • an intersection means that the target object's action trajectory overlaps with that of a confirmed case, for example, the target object and the confirmed case took the same flight at the same time, or their residences are close to each other. In that case, the server returns the first confirmation information to the head-mounted device to indicate that the person currently has a relatively high risk of infection; otherwise, the server returns the second confirmation information to the head-mounted device to indicate that the person currently has a low risk of infection.
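As a sketch of how the server side might derive the first or second confirmation information, the snippet below compares a target object's trajectory records against stored trajectories of confirmed cases; the record structure is an assumption made for illustration only.

```python
def generate_confirmation(target_trajectory, confirmed_trajectories):
    """Return the first confirmation information if the target's trajectory intersects
    any confirmed case's trajectory (e.g. same train or flight on the same day),
    otherwise return the second confirmation information."""
    target_set = {(rec["type"], rec["value"], rec.get("date")) for rec in target_trajectory}
    for case in confirmed_trajectories:
        case_set = {(rec["type"], rec["value"], rec.get("date")) for rec in case}
        if target_set & case_set:  # a shared record means the trajectories intersect
            return {"confirmation": "first", "risk": "high"}
    return {"confirmation": "second", "risk": "low"}

# Example: target object A took high-speed train Z111 on day M, as did a confirmed case
target_a = [{"type": "train", "value": "Z111", "date": "M"}]
confirmed = [[{"type": "train", "value": "Z111", "date": "M"}]]
print(generate_confirmation(target_a, confirmed))   # -> first confirmation, high risk
```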
  • through the interaction method of the epidemic prevention system in this embodiment, the temperature information of the target object can be acquired during detection through the head-mounted device; when the temperature information exceeds a preset threshold, the feature information of the target object is acquired and sent to the server to instruct the server to acquire the action trajectory information of the target object within a preset historical period according to the feature information, generate confirmation information according to the action trajectory information and the preset epidemic prevention trajectory information, and return it to the head-mounted device.
  • therefore, the interaction method of the epidemic prevention system in this embodiment can solve the problems in the related art that the detection operation during epidemic prevention detection is inconvenient and that epidemic prevention personnel cannot effectively evaluate the infection risk of the person being detected, thereby improving the detection efficiency of the epidemic prevention personnel while enabling them to effectively evaluate that risk.
  • the interactive method of the epidemic prevention system in this embodiment can be applied to front-line epidemic prevention and control scenarios.
  • the head-mounted device in this embodiment can be worn by prevention and control personnel; through the interaction between the head-mounted device and the server, the prevention and control personnel can learn in time whether the target object currently being detected has exhibited high-risk behavior, and thus effectively evaluate the risk that the target object has been infected; on this basis, the prevention and control personnel can take corresponding measures in time to effectively prevent possible epidemic-spreading behavior by the current target object and thereby control the epidemic effectively.
  • epidemic prevention personnel wear the head-mounted device to detect target object A.
  • when the thermal imaging module in the head-mounted device detects that the temperature of target object A reaches 37.4°C, exceeding the preset threshold of 37.3°C, the head-mounted device automatically scans the facial image of target object A through the camera and uploads the facial image to the server.
  • after the server recognizes the ID card number of target object A from the facial image, it further reads the action trajectory of target object A over the past 14 days, which includes that target object A took high-speed train Z111 on day M; the server compares this trajectory with the trajectories of confirmed cases stored in the server database and finds that a confirmed case also took train Z111 on day M.
  • at this point, the server can determine that target object A has a relatively high risk of infection and returns this information to the head-mounted device. After obtaining this information, the epidemic prevention personnel can carry out follow-up processing of target object A according to the procedure and prevent possible infection-spreading behavior by target object A in time.
  • the interactive method of the epidemic prevention system in this embodiment can also be applied to other scenarios.
  • the method in this embodiment further includes:
  • the headset obtains on-site information and sends the on-site information to the server; where the on-site information is used to instruct the server to query in a preset database based on the on-site information and generate result information;
  • the headset obtains the result information returned by the server.
  • the above-mentioned on-site information can be crime scene information, such as personnel identification information, item information, etc., or medical field information, such as personnel identification information, measurement data information, medication record information, etc.
  • staff can extract the on-site information in a non-contact manner through the head-mounted device to acquire the on-site information.
  • after the server receives the on-site information, it can query the on-site information in the preset database; for example, when the on-site information is measurement data from a medical scene, it can query the personnel database as to whether the measurement data falls within the normal range, and then generate result information to return to the staff so that the staff can carry out follow-up diagnosis and processing.
  • FIG. 10 is a flowchart of an interaction method provided according to an embodiment of the present invention. As shown in FIG. 10, the method in this embodiment includes the following steps:
  • S1002, receive the feature information of the target object sent by the head-mounted device, where the feature information is acquired by the head-mounted device when it detects that the temperature information of the target object exceeds a preset threshold;
  • S1004, acquire the action trajectory information of the target object within a preset historical period according to the feature information, and generate confirmation information according to the action trajectory information and the preset epidemic prevention trajectory information;
  • S1006, return the confirmation information to the head-mounted device.
  • generating the confirmation information according to the action trajectory information and the preset epidemic prevention trajectory information in step S1004 may include:
  • when there is an intersection between the action trajectory information and the epidemic prevention trajectory information, the first confirmation information is generated; or,
  • when there is no intersection between the action trajectory information and the epidemic prevention trajectory information, the second confirmation information is generated.
  • the aforementioned action trajectory information includes at least one of the following:
  • the transportation information of the target object, the city information of the target object, and the residence information of the target object.
  • the aforementioned epidemic prevention trajectory information includes at least one of the following:
  • the transportation information of the confirmed case, the city information of the confirmed case, and the residence information of the confirmed case.
  • the embodiments and optional embodiments of the present invention provide a smart helmet that integrates cloud computing, big data, the Internet of Things, communications, artificial intelligence, and augmented reality technology into a single piece of intelligent, informatized wearable equipment.
  • through control methods such as Bluetooth connection, voice input, and manual input, the smart helmet connects to the relevant back-end systems and can realize functions such as intelligent voice and intelligent image/video recognition, which can effectively improve the work efficiency of security personnel and enhance wearing comfort and safety, ultimately achieving the goal of the "three modernizations", namely: (1) intelligence, by interacting and correlating on-site voice, image, and video data in real time with the back-end business systems and big data of relevant platforms; (2) integration, by combining body protection, information collection and input, communication, and information feedback and output in one terminal; and (3) humanization, by adopting high-tech heat-insulating and cooling materials and an ergonomic lightweight design.
  • the method according to the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform; of course, it can also be implemented by hardware, but in many cases the former is the better implementation.
  • the technical solution of the present invention, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes a number of instructions for causing a terminal device (for example, a smart helmet with an integrated processor) to execute the methods described in the embodiments of the present invention.
  • a smart helmet with intelligent recognition function is also provided.
  • the smart helmet is used to implement the above-mentioned embodiments and preferred implementations, and what has been explained will not be repeated.
  • the term “module” or “unit” may be a combination of software and/or hardware that implements a predetermined function.
  • although the devices described in the following embodiments are preferably implemented by software, implementation by hardware, or by a combination of software and hardware, is also possible and conceivable.
  • FIG. 11 is a schematic diagram of the functional modules of a smart helmet according to an embodiment of the present invention.
  • the smart helmet includes a camera 10, an intelligent recognition module 20 and an output device 30.
  • the camera 10 is configured to collect the image information of the target certificate or license plate;
  • the intelligent recognition module 20 is configured to send the image information of the target certificate or license plate to the recognition server for image recognition of the target certificate or license plate, and to send the recognized target certificate or license plate to the smart helmet management system to obtain the identity information of the target certificate or license plate;
  • the output device 30 is configured to receive the identity information of the target certificate or license plate from the smart helmet management system and display it.
  • the smart helmet further includes: a microphone 40 for receiving smart recognition voice command input, and transmitting the voice command to a voice server for recognition, and starting the smart recognition module according to the recognized voice command.
  • the output device 30 includes: an AR display component 31, configured to display the alarm and the location of the alarm in an AR display interface based on the map information, and a headset 32, configured to broadcast the received voice content.
  • the embodiment of the present invention also provides a storage medium in which a computer program is stored, wherein the computer program is configured to execute the steps in any one of the foregoing method embodiments when running.
  • the above-mentioned storage medium may include, but is not limited to, various media that can store a computer program, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
  • modules or steps of the present invention can be implemented by a general computing device, and they can be concentrated on a single computing device or distributed in a network composed of multiple computing devices.
  • they can be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; in some cases, the steps can be executed in an order different from that described here, or they can be made into individual integrated circuit modules, or multiple modules or steps among them can be made into a single integrated circuit module, so that the present invention is not limited to any specific combination of hardware and software.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention provides a smart recognition method and a smart helmet. The smart recognition method includes: collecting image information of a target object through a camera provided on the smart helmet; sending the image information of the target object to a recognition server for image recognition of the target object; sending the recognized target object to a smart helmet management system to obtain identity information of the target object; and displaying the identity information of the target object through an output device of the smart helmet. In the present invention, an intelligent recognition function for target objects such as certificates and license plates is integrated on the helmet, and data interaction with the back-end business system is performed, thereby making the recognition of target objects such as certificates and license plates intelligent and allowing security personnel to perform their duties in a more advanced and efficient manner.

Description

Smart helmet and smart recognition method
Technical field
The present invention relates to the field of wearable devices, and in particular to a smart helmet and a smart recognition method.
Background
Against the background of current social development, security personnel are generally in short supply and work under high intensity and heavy burdens, and the equipment they carry, such as helmets, suffers to a certain extent from defects such as a single function and insufficient back-end system support; for example, it cannot interact well with the back-end system to quickly recognize target objects, such as certificates and license plates, in certain specific scenarios.
Therefore, in view of the above problems of existing helmet equipment, there is an urgent need for a smart helmet that can support intelligent recognition of target objects such as certificates and license plates.
Summary of the invention
The embodiments of the present invention provide a smart helmet and a smart recognition method, so as to at least solve the problem in the related art that existing wearable devices cannot support intelligent recognition of target objects such as certificates and license plates.
According to one embodiment of the present invention, a smart recognition method is provided, including: collecting image information of a target object through a camera provided on a smart helmet; sending the image information of the target object to a recognition server for image recognition of the target object; sending the recognized target object to a smart helmet management system to obtain identity information of the target object; and receiving the identity information of the target object from the smart helmet management system and displaying the identity information of the target object through an output device of the smart helmet.
Optionally, before collecting the image information of the target object through the camera provided on the smart helmet, the method further includes: receiving a voice instruction input for intelligent recognition through a microphone, transmitting the voice instruction to a voice server for recognition, and activating the recognition function for the target object according to the recognized voice instruction.
Optionally, before collecting the image information of the target object through the camera provided on the smart helmet, the method further includes: receiving an intelligent recognition instruction input through a smart watch, and activating the recognition function for the target object according to the intelligent recognition instruction.
Optionally, after sending the recognized target object to the smart helmet management system to obtain the identity information of the target object, the method further includes: the smart helmet management system stores the recognized target object in cloud storage.
Optionally, displaying the identity information of the target object through the output device of the smart helmet includes: displaying the identity information of the target object in an AR display interface of the smart helmet, and broadcasting the identity information of the target object by voice through a headset of the smart helmet.
Optionally, the target object is at least one of the following: a certificate and a license plate.
According to another embodiment of the present invention, a smart helmet with an intelligent recognition function is provided, including: a camera configured to collect image information of a target object; an intelligent recognition module configured to send the image information of the target object to a recognition server for image recognition of the target object, and to send the recognized target object to a smart helmet management system to obtain identity information of the target object; and an output device configured to receive the identity information of the target object from the smart helmet management system and display the identity information of the target object.
Optionally, the smart helmet further includes: a microphone configured to receive a voice instruction input for intelligent recognition, transmit the voice instruction to a voice server for recognition, and activate the intelligent recognition module according to the recognized voice instruction.
Optionally, the output device includes: an AR display component configured to display the alert and the alert location in an AR display interface based on the map information, and a headset configured to broadcast the received voice content.
According to yet another embodiment of the present invention, a storage medium is further provided, in which a computer program is stored, wherein the computer program is configured to execute the steps in any one of the above method embodiments when running.
In the above embodiments of the present invention, the recognition function for target objects such as certificates and license plates is integrated on the helmet, and data interaction with the back-end business system is performed, thereby making the recognition of target objects such as certificates and license plates intelligent and allowing security personnel to perform their duties in a more advanced and efficient manner.
Brief description of the drawings
The drawings described here are used to provide a further understanding of the present invention and constitute a part of this application; the exemplary embodiments of the present invention and their descriptions are used to explain the present invention and do not constitute an improper limitation of the present invention. In the drawings:
FIG. 1 is a schematic diagram of a product application of a smart recognition method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the system framework of a smart helmet according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the system composition of a smart helmet according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an application environment of a smart recognition method according to an embodiment of the present invention;
FIG. 5 is a flowchart of a smart recognition method according to an embodiment of the present invention;
FIG. 6 is a flowchart of a smart recognition application according to an embodiment of the present invention;
FIG. 7 is a flowchart of an application of a license plate intelligent recognition module according to an embodiment of the present invention;
FIG. 8 is a flowchart of an application of a certificate intelligent recognition module according to an embodiment of the present invention;
FIG. 9 is a flowchart of interaction between a head-mounted device and a server according to an embodiment of the present invention;
FIG. 10 is a flowchart of interaction between a server and a head-mounted device according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of the module structure of a smart helmet according to an embodiment of the present invention.
Detailed description of the embodiments
In order to enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present invention.
It should be noted that the terms "first", "second", and the like in the specification, the claims, and the above drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data used in this way can be interchanged where appropriate, so that the embodiments of the present invention described here can be implemented in an order other than that illustrated or described here. In addition, the terms "including" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to the steps or units clearly listed, but may include other steps or units that are not clearly listed or that are inherent to the process, method, product, or device.
In order to better understand the technical solutions of the embodiments of the present invention and the optional embodiments, the application scenarios that may occur in the embodiments of the present invention and the optional embodiments are described below, but this is not intended to limit the application to the following scenarios.
FIG. 1 is a schematic diagram of a product of a smart helmet with an information push function according to an embodiment of the present invention. Taking the helmet body as a reference, the smart helmet is divided into seven areas: the outer front side L1, the outer top side L2, the outer left and right sides L3, the outer rear side L4, the inner front side L5, the inner top side L6, and the inner rear side L7. The outer front side L1 is the information collection area, used to house the camera; the outer top side L2 is the communication area; the outer rear side L4 is the energy supply area; the inner top side L6 is the main board and heat dissipation area; the outer left and right sides L3 are the functional areas; the inner front side L5 is the AR module and goggles area; and the inner rear side L7 is the head lock device.
Optionally, the technical solution of the embodiment of the present invention is applied in the information collection area on the outer front side: the camera in the information collection area collects images of target objects to be recognized, such as certificates or license plates.
The system framework of the smart helmet is shown in FIG. 2; the specific system layers are as follows:
Functional layer: in terms of functionality, in addition to providing the most basic protective function, the smart helmet can also provide recognition functions such as ID card recognition, driver's license recognition, and license plate recognition.
Supporting layer: in addition to the smart helmet and the smart watch in the hardware part, it also includes a back-end smart helmet management system and a third-party Internet application service platform, providing hardware and service support for realizing the intelligence of the wearable smart helmet.
Resource layer: the wearable smart helmet system is connected, through cloud services and back-end servers, to the systems of relevant platforms and to recognition and retrieval databases, truly realizing the intelligence and informatization of the wearable smart helmet.
The composition of the smart helmet system is shown in FIG. 3. The smart helmet system composed of the smart helmet, the smart watch, and the smart helmet management system improves the efficiency with which the smart helmet collects and recognizes target objects in actual work scenarios.
According to one aspect of the embodiments of the present invention, a method for intelligently recognizing a certificate or a license plate is provided. The smart recognition method can be applied to, but is not limited to, the application environment shown in FIG. 4. As shown in FIG. 4, the smart helmet 102 runs an APP application, and the smart helmet 102 includes a camera and a microphone. The smart helmet 102 can receive the user's voice instruction through the microphone, respond to the voice instruction, control the camera to photograph the certificate or license plate to be recognized, and upload the photo to the back-end server 104 via the network to recognize the target object such as the certificate or license plate, compare it, and obtain the corresponding identity information. The back-end server 104 may be an image recognition server, a voice server for voice instructions, an identity database of the public security system, a license plate information database, and the like; the recognition result is delivered to the smart helmet 102 for analysis and use by the wearer of the smart helmet 102. The embodiment of the present invention does not limit this; the above is only an example.
This embodiment provides a smart recognition method for a smart helmet. FIG. 5 is a flowchart of a smart recognition method according to an embodiment of the present invention. As shown in FIG. 5, the method mainly includes the following steps:
Step S502, collect image information of the target object through the camera provided on the smart helmet, for example, photograph certificates such as ID cards and driver's licenses, or license plates;
Step S504, send the image information of the target object to a recognition server for image recognition of the target object;
Step S506, send the recognized target object to the smart helmet management system to obtain the identity information of the target object;
Step S508, receive the identity information of the target object from the smart helmet management system, and display the identity information of the recognized target object through the output device of the smart helmet.
In the above embodiment of the present invention, the recognition function for target objects such as certificates and license plates is integrated on the helmet, and data interaction with the back-end business system is performed, thereby making the recognition of certificates and license plates intelligent and allowing security personnel to perform their duties in a more advanced and efficient manner.
Before step S502 of the above embodiment, the smart helmet may receive an instruction sent by the user and activate the intelligent recognition function according to the received instruction. The instruction may be received in the following ways: 1) receive a voice instruction input for intelligent recognition through the microphone, transmit the voice instruction to the voice server for recognition, and activate the recognition function for the target certificate or license plate according to the recognized voice instruction; 2) receive an intelligent recognition instruction input through the smart watch, and activate the recognition function for the target certificate or license plate according to the intelligent recognition instruction.
Optionally, in step S504 of the above embodiment, the smart helmet uploads the photo of the ID card, driver's license, or license plate to the recognition service provider for comparison, and obtains the corresponding ID card number, driver's license number, or license plate number.
Optionally, in step S506 of the above embodiment, the application service obtains the corresponding ID card or license plate information from the recognition information database according to the ID card number, driver's license number, or license plate number.
Optionally, after step S506 of the above embodiment, the method may further include: the smart helmet management system stores the recognized target certificate or license plate in cloud storage.
In step S508 of the above embodiment, the identity information of the target certificate or license plate can be displayed in the AR display interface of the smart helmet, and the identity information of the target certificate or license plate can be broadcast by voice through the headset of the smart helmet.
In order to better understand the above multimedia information determination process, the following description is given with reference to an optional embodiment, but it is not intended to limit the technical solutions of the embodiments of the present invention.
In an optional embodiment of the present invention, a voice instruction sent by a police officer is received through the smart helmet; in response to the voice instruction, the intelligent recognition function is activated, the camera of the smart helmet is controlled to photograph the certificate or license plate to be recognized, and data interaction is performed with the back-end server to recognize the certificate or license plate. FIG. 6 is a schematic flowchart of the application of the intelligent recognition function of the smart helmet; the interaction logic in this embodiment is shown in Table 1 below:
Table 1
(Table 1 is provided as images PCTCN2020093728-appb-000001 and PCTCN2020093728-appb-000002 in the original filing.)
In an optional embodiment, a license plate recognition module is integrated in the APP application of the smart helmet, which can capture the license plate through the camera, recognize the license plate, and return detailed vehicle information, for example including vehicle registration information, owner information, and whether the vehicle comes from a severely affected epidemic area. As shown in FIG. 7, in this embodiment the recognition system includes a helmet end, an image recognition server, an application server, and a one-car-one-file system. Taking epidemic prevention and control as an example application scenario, the interaction process between the devices is as follows:
Step S701, at the epidemic prevention and control checkpoint, the prevention and control personnel receive a license plate recognition instruction through the helmet they are wearing, and license plate recognition starts;
Step S702, the helmet end invokes the camera to photograph the license plate to be recognized, grabs the image frame, and uploads it to the image recognition server (for example, an Alibaba server);
Step S703, the image recognition server recognizes the license plate picture and returns the recognition result to the helmet end;
Step S704, the helmet end judges from the recognition result whether a license plate has been recognized; if yes, step S705 is executed; otherwise, a prompt indicates that there is no vehicle information;
Step S705, the helmet end sends the recognized license plate number to the application server;
Step S706, the application server accesses the interface of the one-car-one-file system with the license plate number as a parameter;
Step S707, the one-car-one-file system queries the database for detailed information on the license plate number, for example including the owner information, the vehicle registration information, the driving route of the vehicle, and whether the vehicle comes from a severely affected epidemic area, and returns it to the application server;
Step S708, the application server judges whether there is vehicle information; if not, step S709 is executed; if yes, step S710 is executed;
Step S709, prompt the helmet end that there is no vehicle information;
Step S710, feed back the relevant vehicle information to the helmet end, for example, the owner information, the vehicle registration information, the driving route of the vehicle, and whether the vehicle comes from an epidemic area;
Step S711, the helmet end receives the relevant vehicle information returned from the application server and displays it to the prevention and control personnel, for example, the owner's registration information and whether the vehicle comes from an epidemic area are displayed through the AR display component. At the same time, the relevant information can also be broadcast through the headset. Therefore, the prevention and control personnel can quickly screen the vehicle based on the vehicle-related information fed back from the back end.
In an optional embodiment, a certificate recognition module is integrated in the APP application of the smart helmet, which can capture image information of a certificate, for example an ID card or a driver's license, through the camera, perform identity recognition on the certificate, and return the person's detailed identity information as well as the person's origin information, travel information, and so on, for example whether the person comes from an epidemic area and the means of transportation the person has taken. As shown in FIG. 8, in this embodiment the recognition system includes a helmet end, an image recognition server, an application server, and a one-person-one-file system. Taking epidemic prevention and control as an example application scenario, the interaction process between the devices is as follows:
Step S801, at the epidemic prevention and control checkpoint, the prevention and control personnel receive a certificate recognition instruction (for example, for an ID card or a driver's license) through the helmet they are wearing, and certificate recognition starts;
Step S802, the helmet end invokes the camera to photograph the certificate of the person to be identified, grabs the image frame, and uploads it to the image recognition server (for example, an Alibaba server);
Step S803, the image recognition server recognizes the certificate picture and returns the recognition result to the helmet end;
Step S804, the helmet end judges from the recognition result whether a certificate has been recognized; if yes, step S805 is executed; otherwise, a prompt indicates that there is no personnel information for the certificate;
Step S805, the helmet end sends the recognized personnel information to the application server;
Step S806, the application server accesses the interface of the one-person-one-file system with the certificate number as a parameter;
Step S807, the one-person-one-file system queries the database for detailed information on the certificate number and obtains the personnel information for the certificate, for example the person's identity information as well as the person's origin information and travel information (for example, whether the person comes from an epidemic area and the means of transportation taken), and returns it to the application server;
Step S808, the application server judges whether there is personnel information for the certificate; if not, step S809 is executed; if yes, step S810 is executed;
Step S809, prompt the helmet end that there is no such personnel information;
Step S810, feed back the personnel information corresponding to the certificate to the helmet end, for example, the person's identity information as well as the person's origin information and travel information, such as whether the person comes from an epidemic area and the means of transportation taken;
Step S811, the helmet end receives the personnel information returned from the application server and displays it to the epidemic prevention and control personnel, for example, the person's identity information as well as the person's origin information and travel information, such as whether the person comes from an epidemic area and the means of transportation taken. Therefore, the epidemic prevention and control personnel can quickly and efficiently screen the relevant person based on the personnel information fed back from the back end.
The interaction between the smart helmet and the back-end server is described below with reference to an epidemic prevention and control scenario. This embodiment is not limited to a smart helmet; other head-mounted devices may also be used. FIG. 9 is a flowchart of an interaction method provided according to an embodiment of the present invention. As shown in FIG. 9, the method in this embodiment includes the following steps:
S902, when the target object (a person) passes through the epidemic checkpoint, the prevention and control personnel can acquire the temperature information of the target object through the head-mounted device;
S904, when the temperature information exceeds a preset threshold, the head-mounted device is invoked to acquire feature information of the target object, and the feature information is sent to the server, where the feature information is used to instruct the server to acquire the action trajectory information of the target object within a preset historical period according to the feature information, and to generate confirmation information according to the action trajectory information and preset epidemic prevention trajectory information;
S906, the head-mounted device receives the confirmation information returned by the server.
To further explain the interaction method of the epidemic prevention system in this embodiment, the head-mounted device involved in this embodiment is described below. It should be further explained that the head-mounted devices involved in the interaction method of the epidemic prevention system in this embodiment include, but are not limited to, smart helmets, smart glasses, and other head-mounted devices that can be worn by users; the head-mounted devices listed below are only one implementation carrier of the interaction method of the epidemic prevention system in this embodiment and are not necessary equipment for implementing this embodiment.
In the above step S902, the head-mounted device acquires the temperature information of the target object by means of temperature measurement equipment provided in the functional area of the head-mounted device, for example a thermal imaging temperature measurement module or an infrared temperature measurement module, which measures the temperature of the target object and thereby completes the acquisition of the temperature information of the target object.
In the above step S904, the head-mounted device sends the feature information to the server by means of a communication module provided in the communication area of the head-mounted device, for example a 4G/5G module or a WiFi module, which realizes the information interaction between the head-mounted device and the server and thereby completes the sending of the feature information to the server.
It should be further explained that, in the above step S904, the determination of whether the temperature information exceeds the preset threshold may, in an optional embodiment, be made by a processor mounted on the body of the head-mounted device; in another optional embodiment, the head-mounted device may send the temperature information to the server or the cloud for comparison with the threshold, and the head-mounted device is instructed to perform the corresponding processing according to the comparison result, which is not limited in the present invention. The above preset threshold may be 37.3°C.
It should be further explained that the above feature information of the target object is information indicating the identity of the target object, for example, a facial image of the target object or a certificate ID of the target object. In an optional embodiment, in the above step S902, the head-mounted device may acquire the feature information of the target object by photographing the target object with the camera in the collection area of the head-mounted device. After the feature information of the target object, such as facial information, is acquired, the facial information may be sent directly to the server as the feature information, or the head-mounted device may recognize the certificate ID of the target object, such as the ID card number, from the facial image and then send the certificate ID to the server as the feature information, which is not limited in the present invention.
After the server receives the feature information, it can acquire the action trajectory information, within a preset historical period, of the target object indicated by the feature information, and generate confirmation information according to the action trajectory information and the preset epidemic prevention trajectory information.
It should be further explained that the above action trajectory information indicates the trajectory of actions produced by the target object within a preset historical period, such as 14 days; in an optional embodiment, the above action trajectory information includes at least one of the following: transportation information of the target object, city information of the target object, and residence information of the target object.
It should be further explained that the above transportation information refers to the means of transportation taken by the target object, for example, airplanes and trains; the server can obtain the times and schedules of the means of transportation taken by the target object within the preset historical period by connecting to data of the relevant transportation departments. The above city information of the target object refers to the cities the target object has passed through; on the basis of the transportation information, the server can combine it with the accommodation registration information of the target object to obtain the cities the target object has passed through within the preset historical period. The above residence information of the target object refers to the residential address or permanent residence of the target object.
It should be further explained that any information produced by the target object within the preset historical period that can record its action trajectory is equivalent to the above action trajectory information.
It should be further explained that the above epidemic prevention trajectory information indicates the trajectory of actions produced by confirmed cases of the target epidemic in the current epidemic prevention scenario; in an optional embodiment, the above epidemic prevention trajectory information includes at least one of the following: transportation information of the confirmed case, city information of the confirmed case, and residence information of the confirmed case.
It should be further explained that the above transportation information refers to the means of transportation taken by the confirmed case, for example, airplanes and trains; the server can obtain the times and schedules of the means of transportation taken by the confirmed case within a certain period (for example, the 14 days before the date of diagnosis) by connecting to data of the relevant transportation departments. The above city information of the confirmed case refers to the cities the confirmed case has passed through; on the basis of the transportation information, the server can combine it with the accommodation registration information of the confirmed case to obtain the cities the confirmed case has passed through within a certain period. The above residence information of the confirmed case refers to the residential address or permanent residence of the confirmed case.
It should be further explained that any information produced by the confirmed case within a certain period that can record its action trajectory is equivalent to the above epidemic prevention trajectory information.
In an optional embodiment, the above confirmation information includes first confirmation information and second confirmation information,
where the first confirmation information is used to indicate that there is an intersection between the action trajectory information and the epidemic prevention trajectory information, and the second confirmation information is used to indicate that there is no intersection between the action trajectory information and the epidemic prevention trajectory information.
It should be further explained that an intersection between the above action trajectory information and the epidemic prevention trajectory information means that the action trajectory of the target object indicated by the action trajectory information overlaps to a certain extent with the action trajectory of a confirmed case indicated by the epidemic prevention trajectory information, for example, the target object and the confirmed case took the same flight at the same time, or the residence of the target object is close to that of the confirmed case; in such cases, the server can return the first confirmation information to the head-mounted device to indicate that the person currently has a relatively high risk of infection. Otherwise, the server can return the second confirmation information to the head-mounted device to indicate that the person currently has a low risk of infection.
Through the interaction method of the epidemic prevention system in this embodiment, the temperature information of the target object can be acquired during detection through the head-mounted device; when the temperature information exceeds the preset threshold, the feature information of the target object is acquired and sent to the server to instruct the server to acquire the action trajectory information of the target object within the preset historical period according to the feature information, generate confirmation information according to the action trajectory information and the preset epidemic prevention trajectory information, and return it to the head-mounted device. Therefore, the interaction method of the epidemic prevention system in this embodiment can solve the problems in the related art that the detection operation in the epidemic prevention detection process is inconvenient and that epidemic prevention personnel cannot effectively evaluate the infection risk of the person being detected, thereby improving the detection efficiency of the epidemic prevention personnel while enabling them to effectively evaluate that risk.
Specifically, the interaction method of the epidemic prevention system in this embodiment can be applied to front-line epidemic prevention and control scenarios. The head-mounted device in this embodiment can be worn by prevention and control personnel, and through the interaction between the head-mounted device and the server, the prevention and control personnel can learn in time whether the target object currently being detected has exhibited high-risk behavior and thus effectively evaluate the risk that the target object has been infected; on this basis, the prevention and control personnel can take corresponding measures in time to effectively prevent possible epidemic-spreading behavior by the current target object, thereby effectively controlling the epidemic.
The interaction method of the epidemic prevention system in this embodiment is further explained below by way of a specific example:
Epidemic prevention personnel wear the head-mounted device to detect target object A. When the thermal imaging module in the head-mounted device detects that the temperature of target object A reaches 37.4°C, exceeding the preset threshold of 37.3°C, the head-mounted device automatically scans the facial image of target object A through the camera and uploads the facial image to the server. After the server recognizes the ID card number of target object A from the facial image, it further reads the action trajectory of target object A over the past 14 days, which includes that target object A took high-speed train Z111 on day M; the server compares the above action trajectory of target object A with the action trajectories of confirmed cases stored in the server database and finds that a certain confirmed case also took train Z111 on day M. At this point, the server can determine that target object A has a relatively high risk of infection and returns this information to the head-mounted device. After obtaining this information, the epidemic prevention personnel can carry out follow-up processing of target object A according to the procedure and prevent possible infection-spreading behavior by target object A in time.
In addition, the interaction method of the epidemic prevention system in this embodiment is also applicable to other scenarios. In an optional embodiment, the method in this embodiment further includes:
the head-mounted device acquires on-site information and sends the on-site information to the server, where the on-site information is used to instruct the server to perform a query in a preset database according to the on-site information and to generate result information;
the head-mounted device acquires the result information returned by the server.
It should be further explained that the above on-site information may be crime scene information, such as personnel identification information and item information, or medical scene information, such as personnel identification information, measurement data information, and medication record information; staff can extract the on-site information in a non-contact manner through the head-mounted device to acquire the on-site information. After the server receives the on-site information, it can query the on-site information in the preset database; for example, when the on-site information is measurement data from a medical scene, it can query the personnel database as to whether the measurement data falls within the normal range, and then generate result information to return to the staff so that the staff can carry out follow-up diagnosis and processing.
The interaction between the head-mounted device and the back-end server is described below from the perspective of the back-end server, with reference to an epidemic prevention and control scenario. FIG. 10 is a flowchart of an interaction method provided according to an embodiment of the present invention. As shown in FIG. 10, the method in this embodiment includes the following steps:
S1002, receive the feature information of the target object sent by the head-mounted device, where the feature information is acquired by the head-mounted device when it detects that the temperature information of the target object exceeds a preset threshold;
S1004, acquire the action trajectory information of the target object within a preset historical period according to the feature information, and generate confirmation information according to the action trajectory information and preset epidemic prevention trajectory information;
S1006, return the confirmation information to the head-mounted device.
In an optional embodiment, generating the confirmation information according to the action trajectory information and the preset epidemic prevention trajectory information in the above step S1004 may include:
when there is an intersection between the action trajectory information and the epidemic prevention trajectory information, generating the first confirmation information; or,
when there is no intersection between the action trajectory information and the epidemic prevention trajectory information, generating the second confirmation information.
In an optional embodiment, the above action trajectory information includes at least one of the following:
transportation information of the target object, city information of the target object, and residence information of the target object.
In an optional embodiment, the above epidemic prevention trajectory information includes at least one of the following:
transportation information of the confirmed case, city information of the confirmed case, and residence information of the confirmed case.
As explained through the relevant application scenarios and the optional embodiments of the present invention, in view of the problems and deficiencies of current security terminal equipment, the embodiments and optional embodiments of the present invention provide a smart helmet that integrates cloud computing, big data, the Internet of Things, communications, artificial intelligence, and augmented reality technology into a single piece of intelligent, informatized wearable equipment. Through control methods such as Bluetooth connection, voice input, and manual input, the smart helmet connects to the relevant back-end systems and can realize functions such as intelligent voice and intelligent image/video recognition, which can effectively improve the work efficiency of security personnel and enhance wearing comfort and safety, ultimately achieving the goal of the "three modernizations", namely:
(1) Intelligence: on-site voice, image, and video data interact and correlate in real time with the back-end business systems and big data of relevant platforms, making the security terminal intelligent, freeing the hands of security personnel, and making duty work more advanced and efficient;
(2) Integration: by combining body protection, information collection and input, communication, and information feedback and output in one device, terminal integration is achieved, making duty work safer and more convenient;
(3) Humanization: by adopting technologies such as high-tech heat-insulating and cooling materials and an ergonomic lightweight design, the terminal is humanized, making it more comfortable for staff to wear and easier to maintain.
Through the description of the above implementations, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform; of course, it can also be implemented by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes a number of instructions for causing a terminal device (for example, a smart helmet with an integrated processor) to execute the methods described in the embodiments of the present invention.
This embodiment also provides a smart helmet with an intelligent recognition function. The smart helmet is used to implement the above embodiments and preferred implementations, and what has already been explained will not be repeated. As used below, the term "module" or "unit" may be a combination of software and/or hardware that implements a predetermined function. Although the devices described in the following embodiments are preferably implemented by software, implementation by hardware, or by a combination of software and hardware, is also possible and conceivable.
FIG. 11 is a schematic diagram of the functional modules of a smart helmet according to an embodiment of the present invention. As shown in the figure, the smart helmet includes a camera 10, an intelligent recognition module 20, and an output device 30.
The camera 10 is configured to collect image information of the target certificate or license plate;
the intelligent recognition module 20 is configured to send the image information of the target certificate or license plate to the recognition server for image recognition of the target certificate or license plate, and to send the recognized target certificate or license plate to the smart helmet management system to obtain the identity information of the target certificate or license plate;
the output device 30 is configured to receive the identity information of the target certificate or license plate from the smart helmet management system and display it.
Optionally, the smart helmet further includes: a microphone 40 configured to receive a voice instruction input for intelligent recognition, transmit the voice instruction to the voice server for recognition, and activate the intelligent recognition module according to the recognized voice instruction.
Optionally, the output device 30 includes: an AR display component 31 configured to display the alert and the alert location in an AR display interface based on the map information, and a headset 32 configured to broadcast the received voice content.
An embodiment of the present invention also provides a storage medium in which a computer program is stored, wherein the computer program is configured to execute the steps in any one of the above method embodiments when running.
Optionally, in this embodiment, the above storage medium may include, but is not limited to, various media that can store a computer program, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
Obviously, those skilled in the art should understand that the above modules or steps of the present invention can be implemented by a general-purpose computing device; they can be concentrated on a single computing device or distributed across a network composed of multiple computing devices. Optionally, they can be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; in some cases, the steps shown or described can be executed in an order different from that described here, or they can be made into individual integrated circuit modules, or multiple modules or steps among them can be made into a single integrated circuit module. In this way, the present invention is not limited to any specific combination of hardware and software.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention. For those skilled in the art, the present invention may have various modifications and changes. Any modification, equivalent replacement, improvement, and the like made within the principles of the present invention shall be included within the protection scope of the present invention.

Claims (11)

  1. A smart recognition method, characterized by comprising:
    collecting image information of a target object through a camera provided on a smart helmet;
    sending the image information of the target object to a recognition server to perform image recognition on the target object;
    sending the recognized target object to a smart helmet management system to obtain identity information of the target object;
    receiving the identity information of the target object from the smart helmet management system, and displaying the identity information of the target object through an output device of the smart helmet.
  2. The method according to claim 1, characterized in that, before collecting the image information of the target object through the camera provided on the smart helmet, the method further comprises:
    receiving a voice instruction input for intelligent recognition through a microphone, transmitting the voice instruction to a voice server for recognition, and activating the recognition function for the target object according to the recognized voice instruction.
  3. The method according to claim 1, characterized in that, before collecting the image information of the target object through the camera provided on the smart helmet, the method further comprises:
    receiving an intelligent recognition instruction input through a smart watch, and activating the recognition function for the target object according to the intelligent recognition instruction.
  4. The method according to claim 1, characterized in that, after sending the recognized target object to the smart helmet management system to obtain the identity information of the target object, the method further comprises:
    the smart helmet management system storing the recognized target object in cloud storage.
  5. The method according to claim 1, characterized in that displaying the identity information of the target object through the output device of the smart helmet comprises:
    displaying the identity information of the target object in an AR display interface of the smart helmet, and broadcasting the identity information of the target object by voice through a headset of the smart helmet.
  6. The method according to claim 1, characterized in that the target object is at least one of the following: a certificate and a license plate.
  7. A smart helmet, characterized by comprising:
    a camera configured to collect image information of a target object;
    an intelligent recognition module configured to send the image information of the target object to a recognition server to perform image recognition on the target object, and to send the recognized target object to a smart helmet management system to obtain identity information of the target object;
    an output device configured to receive the identity information of the target object from the smart helmet management system and display the identity information of the target object.
  8. The smart helmet according to claim 7, characterized by further comprising:
    a microphone configured to receive a voice instruction input for intelligent recognition, transmit the voice instruction to a voice server for recognition, and activate the intelligent recognition module according to the recognized voice instruction.
  9. The smart helmet according to claim 7, characterized by further comprising:
    a smart watch configured to receive an intelligent recognition instruction input and activate the intelligent recognition module according to the intelligent recognition instruction.
  10. The smart helmet according to claim 7, characterized in that the output device comprises:
    an AR display component configured to display the alert and the alert location in an AR display interface based on the map information,
    a headset configured to broadcast the received voice content.
  11. A computer-readable storage medium, characterized in that a computer program is stored in the storage medium, wherein the computer program is configured to execute the method according to any one of claims 1 to 6 when running.
PCT/CN2020/093728 2020-03-27 2020-06-01 Smart helmet and smart recognition method WO2021189646A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010231608.0A CN111476128A (zh) 2020-03-27 2020-03-27 Smart helmet and smart recognition method
CN202010231608.0 2020-03-27

Publications (1)

Publication Number Publication Date
WO2021189646A1 (zh)

Family

ID=71749159

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/093728 WO2021189646A1 (zh) 2020-03-27 2020-06-01 Smart helmet and smart recognition method

Country Status (2)

Country Link
CN (1) CN111476128A (zh)
WO (1) WO2021189646A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114885092A (zh) * 2022-03-23 2022-08-09 青岛海尔科技有限公司 Control method and apparatus for image acquisition device, storage medium, and electronic device

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112085365A (zh) * 2020-09-01 2020-12-15 深圳市安之眼科技有限公司 Fully automatic associated collection method for "one standard, four actuals" data
CN112633143B (zh) * 2020-12-21 2023-09-05 杭州海康威视数字技术股份有限公司 Image processing system and method, head-mounted device, processing device, and storage medium
CN113313866A (zh) * 2021-04-02 2021-08-27 上海安威士科技股份有限公司 Remote body temperature detection and identity recognition method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN204317627U (zh) * 2014-12-30 2015-05-13 武汉华科道普轨道交通信息技术有限公司 Smart helmet with targeted information push function
CN106557744A (zh) * 2016-10-28 2017-04-05 南京理工大学 Wearable face recognition apparatus and implementation method
CN206472912U (zh) * 2016-12-29 2017-09-08 刘彬 Police helmet with camera-based information recognition function
CN108323855A (zh) * 2018-04-16 2018-07-27 亮风台(上海)信息科技有限公司 AR smart helmet
CN109035774A (zh) * 2018-08-16 2018-12-18 浙江海韵信息技术有限公司 Control system and control method for a smart helmet inspection instrument

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108806153A (zh) * 2018-06-21 2018-11-13 北京旷视科技有限公司 Police alert handling method, apparatus, and system


Also Published As

Publication number Publication date
CN111476128A (zh) 2020-07-31

Similar Documents

Publication Publication Date Title
WO2021189646A1 (zh) Smart helmet and smart recognition method
Zhang et al. Edge video analytics for public safety: A review
US20170076140A1 (en) Wearable camera system and method of notifying person
AU2017277846B2 (en) System and method for distributed intelligent pattern recognition
CN201765513U (zh) Urban security portrait surveillance, tracking and capture system based on portrait biometric recognition technology
US20190171740A1 (en) Method and system for modifying a search request corresponding to a person, object, or entity (poe) of interest
WO2019024414A1 (zh) Method for preventing persons from getting lost, and terminal device
WO2015186447A1 (ja) Information processing device, imaging device, image sharing system, information processing method, and program
JP2014085796A (ja) Information processing device and program
US20180270454A1 (en) Video monitoring method and device
US20160337548A1 (en) System and Method for Capturing and Sharing Content
US20140059704A1 (en) Client device, server, and storage medium
US20220217495A1 (en) Method and network storage device for providing security
CN110852306A (zh) Security monitoring system based on artificial intelligence
KR101084914B1 (ko) Indexing management system for vehicle license numbers and person images
WO2020103620A1 (zh) Police service system based on smart wearable device
WO2021189682A1 (zh) Target recognition method and head-mounted device
WO2021189647A1 (zh) Multimedia information determination method, head-mounted device, storage medium, and electronic device
KR20140021097A (ko) Distributed-processing-based camera video service system and method
KR102236358B1 (ko) System and method for protecting socially vulnerable groups
TW201508651A (zh) Cloud-based intelligent monitoring system for face detection
CN211043853U (zh) Law enforcement glasses
JP5909984B2 (ja) Map creation system and map creation device
KR20160032462A (ko) Social safety net system and method using social network service
JP2021082912A (ja) Investigation support system and person image registration method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20926807

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20926807

Country of ref document: EP

Kind code of ref document: A1