CN115147911A - Smart city file information integration method, system and terminal - Google Patents

Smart city file information integration method, system and terminal

Info

Publication number: CN115147911A
Authority: CN (China)
Prior art keywords: information, image, driver, file, face information
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202211062446.8A
Other languages: Chinese (zh)
Inventors: 万力, 韩东明, 江彬, 王庆焕, 赵诗文, 单洪伟, 李冬冬, 李健, 许茂, 荆胜涛, 王珂
Current Assignee: Shandong Haibo Technology Information System Co., Ltd. (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Shandong Haibo Technology Information System Co., Ltd.
Priority date: 2022-08-22 (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Filing date: 2022-09-01
Publication date: 2022-10-04
Application filed by: Shandong Haibo Technology Information System Co., Ltd.
Publication of: CN115147911A (status: Pending)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50: Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53: Querying
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G06V40/23: Recognition of whole body movements, e.g. for sport training
    • G06V40/25: Recognition of walking or running movements, e.g. gait recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application relates to a smart city file information integration method, system and terminal, belonging to the technical field of smart city construction. The method comprises: acquiring a first image of a driver on a road after the vehicle enters an information acquisition area and the driver grants authorization; extracting the current face information in the first image and judging whether it matches face information pre-stored in an archive information base; if so, extracting the vehicle information in the first image, and if not, storing the current face information in the archive information base; associating the vehicle information with the current face information to generate first file information; and, after the driver gets off the vehicle, acquiring a second image of the driver, extracting gait features from the second image based on a gait recognition algorithm, and associating the gait features with the first file information to generate second file information. The application has the beneficial effect of enriching the archive, so that the corresponding person can be found through the archive.

Description

Smart city file information integration method, system and terminal
Technical Field
The application relates to the technical field of smart city construction, in particular to a smart city file information integration method, a system and a terminal.
Background
The wide application of emerging information technologies such as the Internet of Things, new broadband mobile communication networks and cloud computing has promoted the development of smart cities. This development also provides an opportunity for the informatization of archives.
In daily life, the inventors have found that current city archives are established mainly around face information, so the personal archive information is limited; as a result, considerable time is needed to locate the corresponding person through the archive (for example, when a public security system searches for a person).
Disclosure of Invention
In order to facilitate finding the corresponding person through an archive, the application provides a smart city file information integration method, system and terminal.
In a first aspect, the present application provides a smart city archive information integration method, which adopts the following technical scheme:
a smart city file information integration method includes:
after entering an information acquisition area and being authorized by personnel, acquiring a first image of a driver on a road;
extracting current face information in the first image;
judging whether the current face information is matched with face information prestored in an archive information base or not, and if so, extracting vehicle information in the first image; if not, storing the current face information in the archive information base;
associating the vehicle information with the current face information and generating first file information;
after the step of judging that the current face information is matched with the face information, the method further comprises the following steps:
acquiring a second image of the driver after the driver gets off the vehicle;
extracting gait features in the second image based on a gait recognition algorithm;
and associating the gait features with the first file information and generating second file information.
By adopting the above technical scheme, initially only face information exists in the archive information base. A first image of a driver on the road is acquired and the current face information in the first image is extracted; the current face information is then compared with the face information pre-stored in the archive information base. If the two match, the vehicle information in the first image is extracted and associated with the current face information to generate first file information; if the two do not match, the current face information is stored in the archive information base.
After the driver gets off the vehicle, the extracted gait features of the driver are associated with the first file information, so that the driver's file information is enriched.
Because the person corresponding to a file can be located not only by face information but also by vehicle information and gait features, it becomes easier to find that person through the file.
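The file record built up in this way can be pictured as a single per-person archive entry that gains fields at each stage (vehicle, gait, access control, movement track, copilot face). The sketch below, in Python, is purely illustrative: the class name ArchiveRecord, every field name and all sample values are assumptions introduced here, not terms from the application.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ArchiveRecord:
    """Hypothetical per-person archive record that is enriched stage by stage."""
    face_feature: List[float]                    # pre-stored face information
    vehicle: Optional[dict] = None               # plate, colour, model -> first file information
    gait_features: Optional[List[float]] = None  # added after the driver gets off -> second file information
    access_control: Optional[dict] = None        # community / unit / room -> third file information
    movement_track: Optional[list] = None        # ordered positions -> fourth file information
    copilot_face: Optional[List[float]] = None   # face of the front passenger, if any

# Example: the record after the first two stages have run (values are made up).
record = ArchiveRecord(face_feature=[0.12, 0.53, 0.98])
record.vehicle = {"plate": "LU-A12345", "colour": "white", "model": "sedan"}  # first file information
record.gait_features = [0.4, 0.1, 0.7]                                        # second file information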
Optionally, the information integration method further includes:
when the driver passes through the access control system for the first time, calling the access control information of the driver, which is prestored in the access control system;
and associating the access control information with the second file information, and generating third file information.
By adopting the above technical scheme, the retrieved access control information is associated with the second file information, so that the driver's archive information is further enriched.
Optionally, the information integration method further includes:
acquiring a position starting point and a position ending point of the driver;
generating a movement track based on the position starting point, the position ending point and a map;
and associating the moving track with the third file information to generate fourth file information.
By adopting the technical scheme, the movement track of the driver is associated with the third file information, so that the file information of the driver is further enriched.
Optionally, the information integration method further includes:
when a person is in the copilot, acquiring an image of the person in the copilot;
extracting the copilot face information in the personnel image;
judging whether the copilot face information is matched with the face information in the file information base or not, if so, associating the copilot face information with the fourth file information, and if not, storing the copilot face information in the file information base;
and judging whether the current copilot face information matches the copilot face information from the last time, and if not, updating the copilot face information associated with the fourth file information.
By adopting the technical scheme, the copilot face information is associated with the fourth file information, so that the file information of the driver is further enriched.
Optionally, the information integration method may further include:
and sending the public information in the driver's archive information to an adjacent household based on the access control information.
By adopting the above technical scheme, adjacent residents get to know each other better, which further facilitates finding a person.
In a second aspect, the application provides a smart city archive information integration system, which adopts the following technical scheme:
a smart city file information integration system, comprising:
the first image acquisition module is used for acquiring a first image of a driver on a road after entering the information acquisition area and being authorized by the driver;
the face information extraction module is used for extracting the current face information in the first image;
the judging module is used for judging whether the current face information is matched with face information prestored in an archive information base; if not, storing the current face information in the archive information base;
the vehicle information extraction module is used for extracting the vehicle information in the first image when the current face information is matched with the face information;
the archive information generation module is used for associating the vehicle information with the current face information and generating first archive information;
the second image acquisition module is used for acquiring a second image of the driver after the driver gets off the vehicle;
the gait extraction module is used for extracting gait features in the second image based on a gait recognition algorithm;
the archive information generation module is further used for associating the gait features with the first archive information and generating second archive information.
By adopting the above technical scheme, initially only face information exists in the archive information base. The first image acquisition module acquires a first image of a driver on the road, the face information extraction module extracts the current face information in the first image, and the judging module judges whether the current face information matches face information pre-stored in the archive information base. If it matches, the vehicle information extraction module extracts the vehicle information in the first image, and the archive information generation module associates the vehicle information with the current face information to generate first archive information; if it does not match, the current face information is stored in the archive information base.
The extracted gait features of the driver are then associated with the first archive information, so that the driver's archive information is enriched.
Because the person corresponding to an archive can be located not only by face information but also by vehicle information and gait features, it becomes easier to find that person.
In a third aspect, the present application provides a terminal, which adopts the following technical scheme:
a terminal, comprising:
the memory stores a smart city file information integration program;
and the processor is used for executing the smart city file information integration program stored in the memory so as to realize the steps of the smart city file information integration method.
In a fourth aspect, the present application provides a computer-readable storage medium, which adopts the following technical solutions:
a computer readable storage medium storing a computer program capable of being loaded by a processor and executing the above smart city profile information integration method.
In summary, the present application has at least the following beneficial effects:
1. By associating vehicle information and gait features with face information, finding the person corresponding to an archive no longer relies on face information alone, so the person can be found more quickly.
2. Based on the access control information, the public information in a person's archive is sent to adjacent residents, so that adjacent residents get to know each other better, which further facilitates finding a person.
Drawings
FIG. 1 is a block diagram of a process for generating first profile information according to an embodiment of the method of the present application;
FIG. 2 is a block diagram of a process for generating second file information according to an embodiment of the method of the present application;
FIG. 3 is a block diagram of a process for generating third file information according to an embodiment of the method of the present application;
FIG. 4 is a block diagram of a process for generating fourth file information according to an embodiment of the method of the present application;
FIG. 5 is a block flow diagram of another implementation of a method embodiment of the present application;
FIG. 6 is a block flow diagram of another implementation of a method embodiment of the present application;
FIG. 7 is a block diagram of an embodiment of the system of the present application;
fig. 8 is a block diagram of another embodiment of the system of the present application.
Description of reference numerals: 101. a first image acquisition module; 102. a face information extraction module; 103. a judgment module; 104. a vehicle information extraction module; 105. a file information generation module; 106. a second image acquisition module; 107. a gait extraction module; 108. an access control information calling module; 109. a position information acquisition module; 111. a moving track generating module; 112. a third image acquisition module; 113. an update module; 114. a public information sending module; 115. a comparison module; 116. a vehicle owner information calling module.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to fig. 1 to 8 in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the application discloses a smart city file information integration method. As an embodiment of the method, referring to fig. 1, the method may include steps S110 to S140:
S110, acquiring a first image of a driver on a road after entering an information acquisition area and being authorized by a person;
Specifically, the information acquisition area may be a community, an urban district, or the like. For example, a district in a city may set up district notice boards at its boundaries with adjacent districts, with barriers similar to highway toll gates arranged at the notice boards. When a vehicle passes, an attendant or a voice system asks the people in the vehicle whether they allow the video system in the area to acquire their information; if permission is granted, the information can be acquired in real time through cameras or capture cameras on the road. Here, the person refers to a person in the vehicle.
S120, extracting the current face information in the first image;
S130, judging whether the current face information is matched with face information prestored in an archive information base, and if so, extracting vehicle information in the first image; if not, storing the current face information in the archive information base;
the vehicle information includes basic information such as license plate number, color and model.
And S140, associating the vehicle information with the current face information and generating first file information.
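A minimal sketch of the S110 to S140 decision flow is given below, assuming face information is represented as an embedding vector compared by cosine similarity; the helper names, the threshold value and the in-memory list standing in for the archive information base are illustrative assumptions rather than details fixed by the application.

import math
from typing import List, Optional

ARCHIVE: List[dict] = []      # stand-in for the archive information base
MATCH_THRESHOLD = 0.8         # assumed similarity threshold for "matched"

def cosine_similarity(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def find_matching_record(face: List[float]) -> Optional[dict]:
    # S130: compare the current face information with the pre-stored face information.
    best = max(ARCHIVE, key=lambda r: cosine_similarity(face, r["face"]), default=None)
    if best is not None and cosine_similarity(face, best["face"]) >= MATCH_THRESHOLD:
        return best
    return None

def integrate_first_image(face: List[float], vehicle: dict) -> dict:
    # S130/S140: either enrich an existing record or store the unmatched face.
    record = find_matching_record(face)
    if record is None:
        record = {"face": face}       # unmatched: store the current face information
        ARCHIVE.append(record)
        return record
    record["vehicle"] = vehicle       # matched: associate the vehicle -> first file information
    return record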
Referring to fig. 2, after performing step S140, the steps of S150-S170 may also be performed:
s150, acquiring a second image of the driver after the driver gets off the vehicle;
in particular, real-time acquisition may be performed by a camera on the road.
S160, extracting gait features in the second image based on a gait recognition algorithm;
wherein, the gait recognition algorithm can be a two-dimensional gait recognition algorithm.
And S170, associating the gait features with the first file information, and generating second file information.
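The gait step (S150 to S170) can be sketched as a second enrichment pass over the same record. The silhouette-averaging "gait energy image" used below is just one common two-dimensional gait representation and is an assumption, since the application does not fix a particular algorithm; all names are illustrative.

from typing import List

def gait_energy_image(silhouettes: List[List[List[int]]]) -> List[List[float]]:
    # Average a sequence of binary silhouette frames into a single 2-D gait template,
    # a simple stand-in for "extracting gait features based on a gait recognition algorithm".
    frames = len(silhouettes)
    rows, cols = len(silhouettes[0]), len(silhouettes[0][0])
    return [[sum(s[r][c] for s in silhouettes) / frames for c in range(cols)]
            for r in range(rows)]

def attach_gait(record: dict, silhouettes: List[List[List[int]]]) -> dict:
    # S170: associate the gait features with the first file information -> second file information.
    record["gait"] = gait_energy_image(silhouettes)
    return record

# Usage with two tiny 2x2 silhouette frames (illustrative only).
demo = attach_gait({"face": [0.1, 0.2]}, [[[0, 1], [1, 1]], [[0, 1], [0, 1]]])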
Referring to fig. 3, as another embodiment of the information integration method, the method may further include the steps of S210-S220:
S210, when a driver passes through the access control system for the first time, calling the access control information of the driver, which is prestored in the access control system;
The access control information includes basic information such as the community name, office building name, company name, unit number, room number, and floor number.
And S220, associating the access control information with the second file information, and generating third file information.
In addition, the public information in the driver's archive information can be sent to adjacent residents based on the access control information. The archive information here includes the third archive information and the fourth archive information of the application; the public information includes non-private information such as the driver's name, work unit, and sex.
For example, according to the acquired access control information, if person A lives in room y of unit m in community X, the public information of person A may be sent to the mobile terminals (e.g., smartphones, smart wearable devices) of the residents in the rooms adjacent to room y.
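A sketch of how the access control association (S210 to S220) and the neighbour notification might be wired together is shown below, assuming a simple room-number adjacency rule and a fixed set of public fields; the adjacency rule, the field names and the message format are all assumptions for illustration.

from typing import Dict, List

def attach_access_control(record: dict, access: Dict[str, str]) -> dict:
    # S220: associate the access control information (community, unit, room, ...)
    # with the second file information, yielding the third file information.
    record["access"] = access
    return record

def neighbouring_rooms(room: int) -> List[int]:
    # Assumed adjacency rule: the rooms immediately before and after on the same floor.
    return [room - 1, room + 1]

def public_info_messages(record: dict) -> List[dict]:
    # Build the public-information notices to be sent to adjacent households.
    public = {k: record[k] for k in ("name", "work_unit", "sex") if k in record}
    access = record["access"]
    return [{"to_room": r, "community": access["community"], "unit": access["unit"], "info": public}
            for r in neighbouring_rooms(int(access["room"]))]

# Example: person A living in room 501 of unit m in community X.
rec = attach_access_control({"name": "A", "sex": "F"}, {"community": "X", "unit": "m", "room": "501"})
messages = public_info_messages(rec)   # notices for rooms 500 and 502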
Referring to fig. 4, as another embodiment of the information integration method, the method may include the steps of S310-S330:
S310, acquiring a position starting point and a position ending point of a driver;
Specifically, the driver's position starting point and position ending point can be located via GPS on the driver's mobile terminal or via the positioning system of the vehicle.
S320, generating a movement track based on the position starting point, the position ending point and the map;
The map may be a high-precision map, a satellite map, or the like. After the driver's position starting point and ending point are determined, they are combined with the roads and the positions of the cameras that captured the driver's images, thereby generating the movement track. In addition, after the driver reaches the position end point, the video data in the vehicle's driving recorder may be retrieved to obtain the movement track.
It should be noted that when data is retrieved, it is retrieved from the cloud.
S330, associating the movement track with the third file information to generate fourth file information.
When the similarity between the newly generated movement track and the previously generated one exceeds 90%, an archive-merging process is performed, namely the two near-duplicate tracks are automatically reduced to a single stored copy.
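A sketch of S310 to S330 together with the 90% merging rule follows, assuming a movement track is an ordered list of road-segment identifiers and that similarity is measured as the share of shared segments; both representations, and all names, are assumptions introduced for illustration only.

from typing import List

def build_track(start: str, end: str, camera_segments: List[str]) -> List[str]:
    # S320: combine the position starting point, the camera road segments and the end point.
    return [start, *camera_segments, end]

def track_similarity(a: List[str], b: List[str]) -> float:
    # Share of road segments the two tracks have in common (illustrative metric).
    if not a and not b:
        return 1.0
    return len(set(a) & set(b)) / len(set(a) | set(b))

def merge_track(record: dict, new_track: List[str], threshold: float = 0.9) -> dict:
    # S330 plus the merging rule: keep only one copy when the tracks are more than 90% similar
    # (the newer copy is retained here; either copy could be kept, since they nearly coincide).
    old = record.get("track")
    if old is None or track_similarity(old, new_track) > threshold:
        record["track"] = new_track               # fourth file information
    else:
        record.setdefault("other_tracks", []).append(new_track)
    return record

r = merge_track({}, build_track("home", "office", ["road-3", "road-7"]))
r = merge_track(r, build_track("home", "office", ["road-3", "road-7"]))   # near-duplicate: overwritten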
Referring to fig. 5, as another embodiment of the information integration method, the method may include the steps of S410-S440:
S410, when there is a person in the copilot (front passenger) seat, acquiring an image of that person;
in particular, the acquisition may be performed by a camera or a snapshot on the road.
S420, extracting the copilot face information in the personnel image;
s430, judging whether the copilot face information is matched with the face information in the file information base, if so, associating the copilot face information with fourth file information, and if not, storing the copilot face information in the file information base;
and S440, judging whether the next copilot face information is matched with the previous copilot face information, and if not, updating the copilot face information related to the fourth file information.
It should be noted that the steps preceding step S410 are all performed when there is no person in the copilot seat.
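The copilot handling of S410 to S440 can be sketched as follows; the element-wise face comparison below is only a placeholder for a real face-matching step, and the update-on-change rule mirrors S440. All names are illustrative assumptions.

from typing import List

def faces_match(a: List[float], b: List[float], tol: float = 1e-3) -> bool:
    # Placeholder comparison: two face feature vectors "match" when element-wise close.
    return len(a) == len(b) and all(abs(x - y) < tol for x, y in zip(a, b))

def update_copilot(record: dict, archive: List[dict], copilot_face: List[float]) -> dict:
    # S430: store an unknown copilot face in the archive information base.
    if not any(faces_match(copilot_face, r["face"]) for r in archive):
        archive.append({"face": copilot_face})
    # S440: associate the copilot face with the fourth file information and
    # replace it when a different copilot is detected next time.
    previous = record.get("copilot_face")
    if previous is None or not faces_match(previous, copilot_face):
        record["copilot_face"] = copilot_face
    return record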
In addition, referring to fig. 6, after performing step S140, steps S141-S142 may also be performed:
S141, when the newly obtained face information of the driver is the same as the last obtained face information, comparing the newly extracted vehicle information with the last extracted vehicle information, and calling the vehicle owner information if the two differ;
The vehicle owner information can be retrieved by interfacing with the system of the vehicle administration department: the license plate number is compared with the license plate numbers registered there, and when a match is found, the owner information registered under that license plate number is automatically retrieved. The owner information may be face information.
S142, judging whether the vehicle owner information matches the driver's information; if so, updating the first file information; if not, associating the newly extracted vehicle information with the first file information and sending prompt information to the vehicle owner.
The driver's information may be face information. The owner's face information is compared with the driver's face information to judge whether they match. If they match, the vehicle belongs to the driver, so the previously associated vehicle information is replaced with the newly extracted vehicle information; if they do not match, the vehicle does not belong to the driver, so the newly extracted vehicle information is associated with the first file information and, at the same time, prompt information is sent to the vehicle owner so that it can be determined whether the owner permitted the driver to drive the vehicle.
It should be noted that steps S110 to S140, steps S150 to S170, steps S210 to S220, steps S310 to S330, and steps S410 to S440 are performed before steps S141 to S142.
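The S141 and S142 consistency check (same driver, different vehicle) might look like the sketch below. The vehicle administration lookup is mocked with a dictionary and the prompt to the owner is reduced to a returned message string; both, together with all names and the sample plate number, are illustrative assumptions.

from typing import Dict, List, Optional

# Mock stand-in for the vehicle administration department's registry: plate -> owner face.
VEHICLE_REGISTRY: Dict[str, List[float]] = {"LU-B98765": [0.3, 0.3, 0.3]}

def lookup_owner_face(plate: str) -> Optional[List[float]]:
    # S141: retrieve the registered owner's information for a license plate number.
    return VEHICLE_REGISTRY.get(plate)

def check_vehicle_change(record: dict, driver_face: List[float], new_vehicle: dict) -> Optional[str]:
    # S142: when the driver appears with a different vehicle, either update the first
    # file information (owner matches driver) or associate the extra vehicle and warn the owner.
    old_plate = record.get("vehicle", {}).get("plate")
    if old_plate == new_vehicle["plate"]:
        return None                                   # same vehicle, nothing to do
    owner_face = lookup_owner_face(new_vehicle["plate"])
    if owner_face is not None and owner_face == driver_face:   # placeholder for a face match
        record["vehicle"] = new_vehicle               # the driver owns it: update first file information
        return None
    record.setdefault("other_vehicles", []).append(new_vehicle)
    return f"Vehicle {new_vehicle['plate']} was driven by another person; please confirm."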
The implementation principle of the embodiment of the application is as follows:
A first image of a driver on the road is acquired, the current face information in the first image is extracted and compared with the face information pre-stored in the archive information base, and if they match, the vehicle information in the first image is extracted and associated with the current face information to generate first file information. After the driver gets off the vehicle, a second image of the driver is acquired, gait features are extracted from the second image based on a gait recognition algorithm, and the gait features are associated with the first file information to generate second file information. The driver's access control information pre-stored in the access control system can also be retrieved and associated with the second file information, thereby generating third file information. Then the driver's position starting point and ending point are acquired, a movement track is generated based on the starting point, the ending point and the map, and the movement track is associated with the third file information, thereby generating fourth file information. In addition, when there is a person in the copilot seat, an image of that person can be acquired, the copilot face information in the image is extracted and matched against the face information, and, if the match succeeds, the copilot face information is associated with the fourth file information.
Based on the above method embodiment, another embodiment of the present application provides a smart city file information integration system. Referring to fig. 7, as an embodiment of the information integrating system, the system may include:
the first image acquisition module 101 is used for acquiring a first image of a driver on a road after entering an information acquisition area and being authorized by a person;
the face information extraction module 102 is configured to extract current face information in the first image;
the judging module 103 is used for judging whether the current face information is matched with the face information prestored in the archive information base; if not, storing the current face information in an archive information base;
the vehicle information extraction module 104 is configured to extract vehicle information in the first image when the current face information matches the face information;
and the archive information generation module 105 is used for associating the vehicle information with the current face information and generating first archive information.
The information integrating system may further include:
the second image acquisition module 106 is used for acquiring a second image of the driver after the driver gets off the vehicle;
a gait extraction module 107, which extracts gait features in the second image based on a gait recognition algorithm;
the profile information generating module 105 is configured to associate the gait characteristics with the first profile information and generate second profile information.
The information integration system may further include:
the access control information calling module 108 is used for calling the access control information of the driver prestored in the access control system;
the archive information generation module 105 is configured to associate the access control information with the second archive information, and generate third archive information.
The information integration system may further include:
a position information obtaining module 109, configured to obtain a position departure point and a position ending point of a driver;
a movement trajectory generation module 111 that generates a movement trajectory based on the position departure point, the position end point, and the map;
the profile information generating module 105 is configured to associate the moving track with the third profile information and generate a fourth profile information.
The information integration system may further include:
a third image obtaining module 112, configured to obtain an image of a person in the co-driver when the person is in the co-driver;
the face information extraction module 102 is used for extracting the copilot face information in the personnel image;
the judging module 103 is used for judging whether the copilot face information is matched with face information prestored in the archive information base; if not, storing the copilot face information in the archive information base;
the archive information generation module 105 is configured to associate the copilot face information with the fourth archive information when the copilot face information matches the face information;
the judging module 103 is further configured to judge whether the next copilot face information matches the previous copilot face information;
and the updating module 113 is used for updating the copilot face information associated with the fourth file information when the next copilot face information is not matched with the previous copilot face information.
The information integration system may further include:
and the public information sending module 114 is used for sending the public information in the driver's archive information to the mobile terminal of the adjacent resident based on the access control information.
Referring to fig. 8, as another embodiment of the information integrating system, the information integrating system may further include:
a comparison module 115, configured to compare the next extracted vehicle information with the last extracted vehicle information when the obtained next face information of the driver is the same as the last face information;
the vehicle owner information calling module 116 is used for calling vehicle owner information from the vehicle administration department system when the two pieces of vehicle information differ;
the judging module 103 is configured to judge whether the vehicle owner information matches the driver's information; if so, the updating module 113 updates the first file information, and if not, the file information generating module 105 associates the newly extracted vehicle information with the first file information and sends prompt information to the vehicle owner's mobile terminal.
It should be noted that the first image acquisition module 101, the second image acquisition module 106, and the third image acquisition module 112 may be the same image acquisition module, such as a camera installed on a road.
The implementation principle of the embodiment is as follows:
A first image of a driver on the road is acquired by the first image acquisition module 101, and the current face information in the first image is extracted by the face information extraction module 102. The judging module 103 then judges whether the current face information matches face information pre-stored in the archive information base; if so, the vehicle information extraction module 104 extracts the vehicle information in the first image, and the archive information generation module 105 associates the vehicle information with the current face information to generate first archive information. After the driver gets off the vehicle, the second image acquisition module 106 acquires a second image of the driver, the gait extraction module 107 extracts gait features from the second image based on a gait recognition algorithm, and the archive information generation module 105 associates the gait features with the first archive information to generate second archive information. The access control information calling module 108 may also call the driver's access control information pre-stored in the access control system, and the archive information generation module 105 associates it with the second archive information to generate third archive information. The position information acquisition module 109 acquires the driver's position starting point and ending point, the movement track generation module 111 generates a movement track based on the starting point, the ending point and the map, and the archive information generation module 105 associates the movement track with the third archive information to generate fourth archive information. In addition, when there is a person in the copilot seat, the third image acquisition module 112 may acquire an image of that person, the face information extraction module 102 extracts the copilot face information from the image, and the judging module 103 judges whether the copilot face information matches the face information; if so, the archive information generation module 105 associates the copilot face information with the fourth archive information.
The third embodiment of the present application further provides a terminal. The terminal may be a client device such as a computer or a smartphone, with the above system built into it, and the terminal may include: a memory and a processor;
the memory is used for storing the intelligent city file information integration program;
the processor is used for executing the smart city file information integration program stored in the memory so as to realize the steps of the smart city file information integration method.
The memory may be in communication connection with the processor through a communication bus, which may be an address bus, a data bus, a control bus, or the like.
Additionally, the memory may include Random Access Memory (RAM) and may also include non-volatile memory (NVM), such as at least one disk memory.
And the processor may be a general purpose processor including a Central Processing Unit (CPU), a Network Processor (NP), etc.; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc.
A fourth embodiment of the present application provides a computer-readable storage medium, which stores a computer program that can be loaded by a processor and execute the above-mentioned smart city profile information integration method.
Computer-readable storage media can be any available media that can be accessed by a computer or a data storage device, such as a server, data center, etc., that includes one or more available media. The available media may be magnetic media (e.g., floppy disk, hard disk, magnetic tape), optical media (e.g., DVD), or semiconductor media (e.g., solid state disk), among others.
The foregoing is a preferred embodiment of the present application and is not intended to limit the scope of the present application in any way, and any features disclosed in this specification (including the abstract and drawings) may be replaced by alternative features serving equivalent or similar purposes, unless expressly stated otherwise. That is, unless expressly stated otherwise, each feature is only an example of a generic series of equivalent or similar features.

Claims (8)

1. A smart city file information integration method is characterized by comprising the following steps:
after entering an information acquisition area and being authorized by personnel, acquiring a first image of a driver on a road;
extracting current face information in the first image;
judging whether the current face information is matched with face information prestored in an archive information base or not, and if so, extracting vehicle information in the first image; if not, storing the current face information in the archive information base;
associating the vehicle information with the current face information and generating first file information;
after the step of judging that the current face information is matched with the face information, the method further comprises the following steps:
acquiring a second image of the driver after the driver gets off the vehicle;
extracting gait features in the second image based on a gait recognition algorithm;
and associating the gait features with the first file information and generating second file information.
2. The smart city file information integration method as claimed in claim 1, further comprising:
when the driver passes through the access control system for the first time, calling the access control information of the driver, which is prestored in the access control system;
and associating the access control information with the second file information, and generating third file information.
3. The method as claimed in claim 2, wherein the information integration method further comprises:
acquiring a position starting point and a position ending point of the driver;
generating a movement track based on the position starting point, the position ending point and a map;
and associating the movement track with the third file information to generate fourth file information.
4. The method as claimed in claim 3, wherein the information integration method further comprises:
acquiring a passenger image of the copilot;
extracting the information of the copilot face in the personnel image;
judging whether the copilot face information is matched with the face information in the file information base, if so, associating the copilot face information with the fourth file information, and if not, storing the copilot face information in the file information base;
and judging whether the next copilot face information matches the previous copilot face information, and if not, updating the copilot face information associated with the fourth file information.
5. The smart city profile information integration method as claimed in any one of claims 2 to 4, wherein the information integration method further comprises:
and sending the public information in the driver's archive information to an adjacent household based on the access control information.
6. A smart city file information integration system, comprising:
the first image acquisition module (101) is used for acquiring a first image of a driver on a road after entering the information acquisition area and being authorized by the driver;
a face information extraction module (102) for extracting current face information in the first image;
the judging module (103) is used for judging whether the current face information is matched with face information prestored in an archive information base; if not, storing the current face information in the archive information base;
a vehicle information extraction module (104) for extracting vehicle information in the first image when the current face information matches the face information;
the archive information generation module (105) is used for associating the vehicle information with the current face information and generating first archive information;
a second image acquisition module (106) for acquiring a second image of the driver after the driver gets off the vehicle;
a gait extraction module (107) for extracting gait features in the second image based on a gait recognition algorithm;
the profile information generation module (105) is configured to associate the gait features with the first profile information and generate second profile information.
7. A terminal, comprising:
the memory stores a smart city file information integration program;
a processor for executing the smart city file information integration program stored in the memory to realize the steps of the smart city file information integration method as claimed in any one of claims 1-5.
8. A computer-readable storage medium storing a computer program capable of being loaded by a processor and executing the smart city profile information integration method according to any one of claims 1 to 5.
CN202211062446.8A (priority date 2022-08-22, filing date 2022-09-01): Smart city file information integration method, system and terminal; status: Pending; publication: CN115147911A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2022110023659 2022-08-22
CN202211002365 2022-08-22

Publications (1)

Publication Number Publication Date
CN115147911A 2022-10-04

Family

ID=83415236

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211062446.8A (priority date 2022-08-22, filing date 2022-09-01): Smart city file information integration method, system and terminal; status: Pending; publication: CN115147911A (en)

Country Status (1)

Country Link
CN (1) CN115147911A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150205997A1 (en) * 2012-06-25 2015-07-23 Nokia Corporation Method, apparatus and computer program product for human-face features extraction
CN109033451A (en) * 2018-08-21 2018-12-18 北京深瞐科技有限公司 People's vehicle dynamic file analysis method and device
CN110414459A (en) * 2019-08-02 2019-11-05 中星智能系统技术有限公司 Establish the associated method and device of people's vehicle
CN111353369A (en) * 2019-10-16 2020-06-30 智慧互通科技有限公司 Application method and system of high-order video of urban roadside parking in assisting criminal investigation
CN111930868A (en) * 2020-08-10 2020-11-13 大连源动力科技有限公司 Big data behavior trajectory analysis method based on multi-dimensional data acquisition



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination