WO2020224116A1 - Method, apparatus, computer device and storage medium for forming a movement trajectory (形成移动轨迹的方法、装置、计算机设备及存储介质) - Google Patents

Method, apparatus, computer device and storage medium for forming a movement trajectory (形成移动轨迹的方法、装置、计算机设备及存储介质)

Info

Publication number
WO2020224116A1
WO2020224116A1 (PCT/CN2019/103180)
Authority
WO
WIPO (PCT)
Prior art keywords
image
image data
time
body shape
feature set
Prior art date
Application number
PCT/CN2019/103180
Other languages
English (en)
French (fr)
Inventor
王保军
江腾飞
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Publication of WO2020224116A1 publication Critical patent/WO2020224116A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Definitions

  • This application belongs to the field of artificial intelligence technology, and relates to methods, devices, computer equipment, and storage media for forming movement tracks.
  • With the spread of urban closed-circuit television (CCTV) monitoring systems and road checkpoints, it has become possible to acquire image data in the road network widely and in real time. These systems play a substantial role in reducing urban crime, increasing the rate of crime detection, and finding missing persons.
  • Under existing technical conditions, forming the movement trajectory of a target person through the urban CCTV monitoring system and road checkpoints is generally achieved manually, for example by manually identifying the target person and drawing his or her movement track.
  • Embodiments of the present application disclose a method, an apparatus, a computer device, and a storage medium for forming a movement track, aiming to form the movement track of a target person accurately and quickly.
  • Some embodiments of the present application disclose a method for forming a movement track, including: acquiring a profile feature sample of the target person; extracting an image feature set from the profile feature sample; extracting image data within an image data comparison range of the road network and comparing it with the image feature set; when the similarity between the image data and the image feature set exceeds a threshold, determining the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range; and marking the appearance position, the disappearance position, the appearance time, and the disappearance time into the road network to form the movement track of the target person.
  • An embodiment of the present application discloses a device for forming a movement track, including:
  • a profile feature sample acquisition module for acquiring the profile feature sample of the target person; an image feature set extraction module for extracting an image feature set from the profile feature sample; a comparison module for extracting image data within the image data comparison range of the road network and comparing it with the image feature set; a target person determination module for determining, when the similarity between the image data and the image feature set exceeds a threshold, the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range; and a movement track marking module for marking the appearance position, the disappearance position, the appearance time, and the disappearance time into the road network to form the movement track of the target person.
  • An embodiment of the present application discloses a computer device including a memory and a processor. The memory stores computer-readable instructions; when the processor executes the computer-readable instructions, the steps of the method for forming a movement track are implemented.
  • An embodiment of the present application discloses one or more non-volatile readable storage media storing computer-readable instructions which, when executed by a processor, cause the processor to execute the steps of the method for forming a movement track.
  • FIG. 1 is a schematic diagram of a method for forming a movement track according to an embodiment of the application
  • FIG. 2 is a schematic diagram of the steps of extracting image data and comparing the image feature set within the image data comparison range in an embodiment of the application;
  • FIG. 3 is a schematic diagram of the steps of determining the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range in an embodiment of the application;
  • FIG. 4 is a schematic diagram of the steps of forming the image data whose similarity exceeds the threshold value into an image sequence according to a time course in an embodiment of the application;
  • FIG. 5 is a schematic diagram of the steps of fusing the body shape image sequence and the facial image sequence to obtain the image sequence in an embodiment of the application;
  • FIG. 6 is a schematic diagram of the steps of updating the comparison range of the image data according to the appearance position, the disappearance position, the appearance time, and the disappearance time in an embodiment of the application;
  • FIG. 7 is an example diagram of the device for forming a movement track in an embodiment of the application.
  • FIG. 8 is an example diagram of the comparison module 30 in an embodiment of the application.
  • FIG. 9 is an example diagram of the target person determining module 40 in an embodiment of this application.
  • FIG. 10 is an example diagram of the time determination sub-module 41 in an embodiment of this application.
  • FIG. 11 is an example diagram of the comparison range update module 60 in an embodiment of this application.
  • FIG. 12 is a block diagram of the basic structure of the computer device 100 in an embodiment of the application.
  • An embodiment of the present application discloses a method for forming a movement track, which is used to form a movement track of a target person.
  • Referring to FIG. 1, a schematic diagram of the method for forming a movement track in an embodiment of the application.
  • the method for forming a movement track includes:
  • The profile feature sample of the target person can be obtained in the following manner. For example, when the target person is a missing person, relatives of the target person can provide the police with photos, videos, etc., of the target person; the police then turn these into profile feature samples stored in a storage medium, from which the profile feature sample of the target person is obtained.
  • The profile feature samples include body shape feature samples and facial feature samples of the target person.
  • the body shape feature samples include: image features of walking posture, ratio features of height to body width, and the like.
  • the facial feature samples include facial image features such as eyebrows, eyes, nose, mouth, etc., and proportional features of each face composition.
  • To fully reflect the appearance of the target person and improve recognition accuracy, the profile feature sample may include: image features of the walking posture from multiple angles, height-to-width ratio features from multiple angles, facial image features from multiple angles, and proportion features of the facial components from multiple angles.
  • The extraction of the image feature set can be performed in combination with the image data in the image data comparison range. Specifically, determine the profile feature content contained in the image data, and extract, from the profile feature sample, image features containing the same profile feature content to form the image feature set.
  • the image data includes body shape image data and face image data
  • the image feature set includes a body shape feature set and a face feature set.
  • The body shape image data and the face image data may be obtained in the following manner: perform body shape detection and human face detection on images collected from the road network to obtain the body shape image data and the face image data, and establish a mapping table between the body shape image data and the face image data according to the time course.
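The detection-to-mapping step above can be sketched in Python. This is an illustrative sketch only, not part of the disclosed embodiments; the data layout (lists of `(timestamp, detection)` pairs, as a detector pipeline might emit them) is an assumption.

```python
from collections import defaultdict

def build_mapping_table(body_detections, face_detections):
    """Group body-shape and face detections by timestamp so the two
    streams can be cross-referenced along the time course."""
    table = defaultdict(lambda: {"body": [], "face": []})
    for ts, det in body_detections:
        table[ts]["body"].append(det)
    for ts, det in face_detections:
        table[ts]["face"].append(det)
    return dict(table)

# Toy data: body seen at t=0 and t=1, faces seen at t=0 and t=2.
mapping = build_mapping_table(
    [(0, "body_a"), (1, "body_b")],
    [(0, "face_a"), (2, "face_c")],
)
```

Keying both streams by the same timestamps is what later allows a missing frame in one sequence to be supplemented from the other.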
  • the image data comparison range is a collection of body image data and facial image data of the selected at least one camera device in a period of time.
  • The camera devices include, but are not limited to, road checkpoints, cameras in the urban closed-circuit television monitoring system, and the like.
  • Referring to FIG. 2, a schematic diagram of the steps of extracting image data within the image data comparison range and comparing it with the image feature set in an embodiment of this application.
  • the step of comparing the extracted image data with the image feature set in the image data comparison range includes:
  • S31 Use a body shape image recognition neural network to compare each frame of body shape images in the body shape image data with the body shape feature set.
  • S32 Calculate the first similarity between the body shape image of each frame and the body shape feature set.
  • S33 Apply a facial image recognition neural network to compare each frame of facial image in the facial image data with the facial feature set.
  • S34 Calculate a second degree of similarity between the facial image of each frame and the facial feature set.
  • Methods for calculating the first similarity between each frame of body shape image and the body shape feature set include: calculating the first similarity using an image histogram, using an average hash algorithm or a perceptual hash algorithm, based on mathematical matrix decomposition, or based on image feature points. The second degree of similarity between each frame of facial image and the facial feature set can also be calculated by these methods.
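Of the listed options, the average hash approach is the simplest to illustrate. The following is a minimal pure-Python sketch, assuming tiny grayscale images supplied as flat lists of intensity values; a production system would use a library implementation (e.g. OpenCV's `img_hash` module) on real frames.

```python
def average_hash(pixels):
    """Average hash of a small grayscale image given as a flat list of
    intensities (0-255): each bit is 1 if the pixel is above the mean."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hash_similarity(h1, h2):
    """Similarity in [0, 1]: the fraction of matching hash bits
    (1 minus the normalized Hamming distance)."""
    matches = sum(1 for a, b in zip(h1, h2) if a == b)
    return matches / len(h1)

# Two frames with the same dark/bright layout but different exposure
# hash identically, so their similarity is 1.0.
h1 = average_hash([10] * 32 + [200] * 32)
h2 = average_hash([12] * 32 + [190] * 32)
sim = hash_similarity(h1, h2)  # -> 1.0
```

A "first similarity" computed this way would then be compared against the first threshold in step S35.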
  • The step of extracting image data within the image data comparison range and comparing it with the image feature set further includes:
  • S35 When the first similarity between the body shape image and the body shape feature set reaches a first threshold, provide the body shape image to the body shape image recognition neural network for image recognition enhancement training. This enhancement training can further improve the accuracy with which the body shape image recognition neural network recognizes body shape images of the target person.
  • S36 When the second similarity between the facial image and the facial feature set reaches a second threshold, provide the facial image to the facial image recognition neural network for image recognition enhancement training. This enhancement training can further improve the accuracy with which the facial image recognition neural network recognizes facial images of the target person.
  • the body image recognition neural network and the facial image recognition neural network can be realized by using a convolutional neural network.
  • The training process of the body shape image recognition neural network and the facial image recognition neural network includes two stages: an initial training stage and an image recognition enhancement training stage. In the initial training stage, manually selected body shape images of the target person are input into the body shape image recognition neural network for training, and manually selected facial images of the target person are input into the facial image recognition neural network for training.
  • In the enhancement training stage, body shape images whose first similarity reaches the first threshold are provided to the body shape image recognition neural network for image recognition enhancement training; this expands the network's training samples and helps improve its recognition accuracy. Likewise, facial images whose second similarity reaches the second threshold are provided to the facial image recognition neural network for enhancement training, expanding its training samples and helping to improve its recognition accuracy.
  • The first threshold is slightly larger than the second threshold, so as to improve the reliability of body shape recognition of the target person.
  • Referring to FIG. 3, a schematic diagram of the steps for determining the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range in an embodiment of the application.
  • the step of determining the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range includes:
  • S41 Form the image data whose similarity exceeds the threshold into an image sequence according to the time course; the earliest time point in the image sequence is the appearance time, and the latest time point in the image sequence is the disappearance time.
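Step S41 can be sketched as follows; the `(timestamp, frame_id)` representation of matched frames is an assumption for illustration.

```python
def appearance_disappearance(matched_frames):
    """Given frames whose similarity exceeded the threshold, each as a
    (timestamp, frame_id) pair, order them into an image sequence and
    return (appearance_time, disappearance_time) per step S41."""
    sequence = sorted(matched_frames)      # order by time course
    appearance_time = sequence[0][0]       # earliest time point
    disappearance_time = sequence[-1][0]   # latest time point
    return appearance_time, disappearance_time

# Matches arriving out of order still yield the correct endpoints.
t_in, t_out = appearance_disappearance([(905, "f3"), (900, "f1"), (902, "f2")])
```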
  • Referring to FIG. 4, a schematic diagram of the steps of forming the image data whose similarity exceeds the threshold into an image sequence according to the time course in an embodiment of the application.
  • the step of forming the image data whose similarity degree exceeds the threshold value into an image sequence according to a time course includes:
  • S411 Form the body shape image data whose first similarity degree reaches the first threshold value into a body shape image sequence according to a time course.
  • S412 Form the facial image data with the second similarity reaching the second threshold into a facial image sequence according to a time course.
  • S413 Fuse the body shape image sequence and the face image sequence to obtain the image sequence.
  • the step of fusing the body shape image sequence and the facial image sequence to obtain the image sequence includes:
  • S413a Fuse each frame of body shape image in the body shape image sequence with each frame of face image in the face image sequence according to the time course.
  • S413b When a node of the time course in the body shape image sequence is missing a body shape image, supplement it with the face image of the same node in the face image sequence.
  • S413c When a node of the time course in the face image sequence is missing a face image, supplement it with the body shape image of the same node in the body shape image sequence.
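The fusion of the two sequences, including the gap-filling behavior described for the fusion unit 413, can be sketched as follows. The representation (each sequence a dict keyed by time node) is an assumption for illustration.

```python
def fuse_sequences(body_seq, face_seq):
    """Fuse a body-shape image sequence and a facial image sequence keyed
    by time node. When one sequence is missing an image at a node, it is
    supplemented from the other sequence at the same node."""
    fused = {}
    for node in sorted(set(body_seq) | set(face_seq)):
        body = body_seq.get(node) or face_seq.get(node)  # gap-fill body
        face = face_seq.get(node) or body_seq.get(node)  # gap-fill face
        fused[node] = (body, face)
    return fused

# Node 1 has only a body image, node 3 only a face image; each is
# supplemented from the other sequence.
fused = fuse_sequences({1: "b1", 2: "b2"}, {2: "f2", 3: "f3"})
```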
  • S42 Apply a road network recognition neural network to recognize the first position of the target person in the road network in the image data corresponding to the appearance time; take the first position as the appearance position.
  • The road network recognition neural network first recognizes the environment in which the target person is located in the image, then judges and records the location of the target person in combination with the road network, and maps the changes of the target person's position into the road network, so as to obtain the position change of the target person in the road network.
  • S43 Apply the road network recognition neural network to recognize the second position of the target person in the road network in the image data corresponding to the disappearance time; take the second position as the disappearance position.
  • the location of the target person can be reflected in the image, so the location of the target person in the road network can be obtained indirectly by identifying the location of the target person in the image.
  • the image includes but is not limited to the face image and the body shape image.
  • The execution order of S42 and S43 is not fixed; likewise, the execution order of S411 and S412 is not fixed, and the execution order of S413b and S413c is not fixed.
  • S5 Mark the appearance position, the disappearance position, the appearance time, and the disappearance time into a road network to form a movement track of the target person.
  • the method for forming a movement track further includes the steps:
  • S6 Update the image data comparison range according to the appearance position, the disappearance position, the appearance time, and the disappearance time.
  • the step of updating the comparison range of the image data according to the appearance position, the disappearance position, the appearance time, and the disappearance time includes:
  • S61 Search for the at least one camera device in the appearing position and the at least one camera device in the disappearing position in combination with the road network.
  • S62 Using the appearance time as the first time endpoint, obtain reverse image data from the image data storage address of the at least one camera device at the appearance position, going backward in time.
  • S63 Using the disappearance time as the second time endpoint, obtain forward image data from the image data storage address of the at least one camera device at the disappearance position, going forward in time.
  • Obtaining reverse image data going backward in time means obtaining the reverse image data at time points earlier than the first time endpoint. For example, when the appearance time is 9:00 am, take 9:00 am as the first time endpoint and obtain the reverse image data at 8:59 am from the image data storage address of the at least one camera device at the appearance position.
  • Obtaining forward image data going forward in time means obtaining the forward image data at time points later than the second time endpoint. For example, when the disappearance time is 9:05 am, take 9:05 am as the second time endpoint and obtain the forward image data at 9:06 am from the image data storage address of the at least one camera device at the disappearance position.
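The two examples above amount to extending the time window by one step in each direction around the observed interval. A sketch, with times expressed in minutes since midnight for simplicity; the `step` parameter is a hypothetical knob, not something specified in the disclosure.

```python
def updated_time_windows(appearance_time, disappearance_time, step=1):
    """Sketch of steps S62/S63: with the appearance time as the first
    time endpoint, fetch reverse image data earlier than it; with the
    disappearance time as the second endpoint, fetch forward image data
    later than it. Returns (reverse_window, forward_window)."""
    reverse_window = (appearance_time - step, appearance_time)
    forward_window = (disappearance_time, disappearance_time + step)
    return reverse_window, forward_window

# Appearance at 9:00 am, disappearance at 9:05 am
# (540 and 545 minutes since midnight).
rev, fwd = updated_time_windows(9 * 60, 9 * 60 + 5)
```

The reverse window here covers 8:59-9:00 am and the forward window 9:05-9:06 am, matching the worked examples in the text.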
  • After updating the image data comparison range according to the appearance position, the disappearance position, the appearance time, and the disappearance time, continue to execute S3, that is, compare the image data extracted within the updated image data comparison range with the image feature set.
  • S3, S4, S5, and S6 are executed cyclically, and the new appearance position, disappearance position, appearance time, and disappearance time of the target person are continuously marked into the road network to form a continuous movement trajectory of the target person.
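The cyclic execution of S3 through S6 can be sketched as a loop. Here `compare` and `update` are hypothetical callables standing in for the comparison/determination steps (S3/S4) and the range update (S6); the toy run only illustrates the control flow, not real image processing.

```python
def form_trajectory(compare, update, initial_range, max_rounds=10):
    """Sketch of the S3-S6 cycle: compare image data within the current
    range; if the target person is found, mark the hit into the track
    (S5), update the comparison range (S6), and repeat."""
    track = []
    rng = initial_range
    for _ in range(max_rounds):
        hit = compare(rng)       # S3/S4: returns None when no match
        if hit is None:
            break
        track.append(hit)        # S5: mark into the road network
        rng = update(hit)        # S6: new comparison range
    return track

# Toy run: each "hit" is just a position index; matching stops after 3.
trace = form_trajectory(
    compare=lambda r: r if r <= 3 else None,
    update=lambda h: h + 1,
    initial_range=1,
)
```

The accumulated `track` plays the role of the continuous movement trajectory marked into the road network.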
  • The method for forming a movement track extracts an image feature set from the acquired profile feature samples, compares the image data with the image feature set within the image data comparison range, and thereby identifies the target person within the image data comparison range.
  • When the similarity between the image data and the image feature set exceeds a threshold, the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range are determined. Then, the appearance position, the disappearance position, the appearance time, and the disappearance time are marked into the road network to form the movement track of the target person.
  • The method further updates the image data comparison range according to the appearance position, the disappearance position, the appearance time, and the disappearance time, and then continues to identify the target person within the updated comparison range around the appearance and disappearance positions in the road network, obtaining the appearance position, disappearance position, appearance time, and disappearance time of the target person in the rest of the road network. In this way, the method can obtain a complete and effective movement trajectory of the target person in the road network.
  • The method of forming a movement track applies artificial-intelligence image recognition technology, can accurately and quickly generate the movement trajectory of the target person in the road network, and can be applied to searching for missing persons and analyzing the travel paths of criminal suspects.
  • An embodiment of the application discloses a device for forming a movement track.
  • FIG. 7 is an example diagram of the device for forming a moving track in an embodiment of the application.
  • the device for forming a movement track includes:
  • the profile feature sample acquisition module 10 is used to acquire profile feature samples of the target person.
  • the image feature set extraction module 20 is used to extract an image feature set from the profile feature samples.
  • the comparison module 30 is used for comparing the extracted image data with the image feature set within the image data comparison range.
  • The target person determination module 40 is configured to determine, when the similarity between the image data and the image feature set exceeds a threshold, the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range.
  • The movement track marking module 50 is configured to mark the appearance position, the disappearance position, the appearance time, and the disappearance time into the road network to form the movement track of the target person.
  • the comparison module 30 includes:
  • the body shape image recognition sub-module 31 is configured to apply a body shape image recognition neural network to compare each frame of body shape image in the body shape image data with the body shape feature set, and calculate the body shape image of each frame and the body shape feature The first similarity of the set.
  • the facial image recognition sub-module 32 is used to apply a facial image recognition neural network to compare each frame of facial image in the facial image data with the facial feature set, and calculate the facial image of each frame The second degree of similarity between the image and the facial feature set.
  • The comparison module 30 further includes an image recognition enhancement training sub-module 33, which provides the body shape image to the body shape image recognition neural network for image recognition enhancement training when the first similarity between the body shape image and the body shape feature set reaches a first threshold, and provides the facial image to the facial image recognition neural network for image recognition enhancement training when the second similarity between the facial image and the facial feature set reaches a second threshold.
  • the target person determining module 40 includes:
  • The time determination sub-module 41 is configured to form the image data whose similarity exceeds the threshold into an image sequence according to the time course, taking the earliest time point in the image sequence as the appearance time and the latest time point as the disappearance time.
  • The position determination sub-module 42 is configured to apply a road network recognition neural network to recognize the first position of the target person in the road network in the image data corresponding to the appearance time, and to take the first position as the appearance position.
  • The position determination sub-module 42 is further configured to apply the road network recognition neural network to recognize the second position of the target person in the road network in the image data corresponding to the disappearance time, and to take the second position as the disappearance position.
  • the time determination submodule 41 includes:
  • the body shape image sequence unit 411 is configured to form a body shape image sequence according to the time course of the body shape image data with the first similarity reaching a first threshold.
  • the facial image sequence unit 412 is configured to form a facial image sequence according to the time course of the facial image data whose second similarity reaches a second threshold.
  • the fusion unit 413 is configured to fuse the body shape image sequence and the face image sequence to obtain the image sequence.
  • The fusion unit 413 fuses each frame of body shape image in the body shape image sequence with each frame of face image in the face image sequence according to the time course. When a node of the time course in the body shape image sequence is missing a body shape image, it is supplemented by the face image of the same node in the face image sequence; when a node of the time course in the face image sequence is missing a face image, it is supplemented by the body shape image of the same node in the body shape image sequence.
  • The device for forming a movement track further includes a comparison range update module 60, configured to update the image data comparison range according to the appearance position, the disappearance position, the appearance time, and the disappearance time.
  • the comparison range update module 60 includes:
  • the camera searching submodule 61 is configured to search for the at least one camera device in the appearing position and the at least one camera device in the disappearing position in combination with the road network.
  • The reverse image data acquisition sub-module 62 is configured to use the appearance time as the first time endpoint and obtain reverse image data, going backward in time, from the image data storage address of the at least one camera device at the appearance position.
  • The forward image data acquisition sub-module 63 is configured to use the disappearance time as the second time endpoint and obtain forward image data, going forward in time, from the image data storage address of the at least one camera device at the disappearance position.
  • the image data update submodule 64 updates the image data comparison range with the reverse image data and the forward image data.
  • FIG. 12 is a block diagram of the basic structure of the computer device 100 in an embodiment of the application.
  • the computer device 100 includes a memory 101, a processor 102, and a network interface 103 that are communicatively connected to each other through a system bus. It should be pointed out that FIG. 12 only shows the computer device 100 with the components 101-103, but it should be understood that it is not required to implement all the illustrated components, and more or fewer components may be implemented instead.
  • The computer device here is a device that can automatically perform numerical calculation and/or information processing according to preset or stored instructions. Its hardware includes, but is not limited to, microprocessors, application-specific integrated circuits (ASIC), field-programmable gate arrays (FPGA), digital signal processors (DSP), embedded devices, and the like.
  • the computer device may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the computer device can interact with the user through a keyboard, a mouse, a remote control, a touch panel, or a voice control device.
  • the memory 101 includes at least one type of readable storage medium.
  • The readable storage medium includes flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical disks, etc.
  • the memory 101 may be an internal storage unit of the computer device 100, such as a hard disk or memory of the computer device 100.
  • The memory 101 may also be an external storage device of the computer device 100, such as a plug-in hard disk, a smart media card (SMC), or a secure digital (SD) card equipped on the computer device 100.
  • the memory 101 may also include both an internal storage unit of the computer device 100 and an external storage device thereof.
  • the memory 101 is generally used to store an operating system and various application software installed in the computer device 100, for example, the computer-readable instructions of the aforementioned method for forming a movement track.
  • the memory 101 can also be used to temporarily store various types of data that have been output or will be output.
  • the processor 102 may be a central processing unit (Central Processing Unit, CPU), a controller, a microcontroller, a microprocessor, or other data processing chips in some embodiments.
  • the processor 102 is generally used to control the overall operation of the computer device 100.
  • the processor 102 is configured to run computer-readable instructions or process data stored in the memory 101, for example, run the computer-readable instructions of the aforementioned method of forming a movement track.
  • the network interface 103 may include a wireless network interface or a wired network interface.
  • the network interface 103 is generally used to establish a communication connection between the computer device 100 and other electronic devices.
  • This application also provides another implementation, namely one or more non-volatile readable storage media storing computer-readable instructions that can be executed by at least one processor, so that the at least one processor executes the steps of any one of the foregoing methods for forming a movement track.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

Method, apparatus, computer device, and storage medium for forming a movement trajectory. The method includes: acquiring a profile feature sample of the target person (S1); extracting an image feature set from the profile feature sample (S2); extracting image data within an image data comparison range of the road network and comparing it with the image feature set (S3); when the similarity between the image data and the image feature set exceeds a threshold, determining the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range (S4); and marking the appearance position, the disappearance position, the appearance time, and the disappearance time into the road network to form the movement trajectory of the target person (S5). The method can quickly generate the movement trajectory of the target person in the road network.

Description

Method, apparatus, computer device, and storage medium for forming a movement trajectory
[Cross Reference]
This application is based on, and claims priority from, Chinese invention patent application No. 2019103775601, filed on May 7, 2019, entitled "形成移动轨迹的方法、装置、计算机设备及存储介质" (Method, apparatus, computer device, and storage medium for forming a movement trajectory).
[Technical Field]
This application belongs to the field of artificial intelligence technology and relates to methods, apparatuses, computer devices, and storage media for forming movement trajectories.
[Background]
With the spread of urban closed-circuit television (CCTV) monitoring systems and road checkpoints, it has become possible to acquire image data in the road network widely and in real time. CCTV monitoring systems and road checkpoints play a substantial role in reducing urban crime, raising crime-detection rates, and finding missing persons.
Under existing technical conditions, forming the movement trajectory of a target person through the CCTV monitoring system and road checkpoints is generally achieved manually, for example by manually identifying the target person and drawing his or her movement track.
[Summary]
Embodiments of this application disclose a method, apparatus, computer device, and storage medium for forming a movement trajectory, aiming to form the movement trajectory of a target person accurately and quickly.
Some embodiments of this application disclose a method for forming a movement trajectory, including:
acquiring a profile feature sample of the target person; extracting an image feature set from the profile feature sample; extracting image data within an image data comparison range of the road network and comparing it with the image feature set; when the similarity between the image data and the image feature set exceeds a threshold, determining the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range; and marking the appearance position, the disappearance position, the appearance time, and the disappearance time into the road network to form the movement trajectory of the target person.
An embodiment of this application discloses an apparatus for forming a movement trajectory, including:
a profile feature sample acquisition module for acquiring the profile feature sample of the target person; an image feature set extraction module for extracting an image feature set from the profile feature sample; a comparison module for extracting image data within the image data comparison range of the road network and comparing it with the image feature set; a target person determination module for determining, when the similarity between the image data and the image feature set exceeds a threshold, the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range; and a movement trajectory marking module for marking the appearance position, the disappearance position, the appearance time, and the disappearance time into the road network to form the movement trajectory of the target person.
An embodiment of this application discloses a computer device including a memory and a processor. The memory stores computer-readable instructions; when the processor executes the computer-readable instructions, the steps of the above method for forming a movement trajectory are implemented.
An embodiment of this application discloses one or more non-volatile readable storage media storing computer-readable instructions which, when executed by a processor, cause the processor to execute the steps of the above method for forming a movement trajectory.
Details of one or more embodiments of this application are set out in the drawings and description below; other features and advantages of this application will become apparent from the specification, the drawings, and the claims.
[Brief Description of the Drawings]
To explain the technical solutions of the embodiments of this application more clearly, the drawings needed for the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; a person of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of the method for forming a movement trajectory in an embodiment of this application;
FIG. 2 is a schematic diagram of the steps of extracting image data within the image data comparison range and comparing it with the image feature set in an embodiment of this application;
FIG. 3 is a schematic diagram of the steps of determining the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range in an embodiment of this application;
FIG. 4 is a schematic diagram of the steps of forming the image data whose similarity exceeds the threshold into an image sequence according to the time course in an embodiment of this application;
FIG. 5 is a schematic diagram of the steps of fusing the body shape image sequence and the facial image sequence to obtain the image sequence in an embodiment of this application;
FIG. 6 is a schematic diagram of the steps of updating the image data comparison range according to the appearance position, the disappearance position, the appearance time, and the disappearance time in an embodiment of this application;
FIG. 7 is an example diagram of the apparatus for forming a movement trajectory in an embodiment of this application;
FIG. 8 is an example diagram of the comparison module 30 in an embodiment of this application;
FIG. 9 is an example diagram of the target person determination module 40 in an embodiment of this application;
FIG. 10 is an example diagram of the time determination sub-module 41 in an embodiment of this application;
FIG. 11 is an example diagram of the comparison range update module 60 in an embodiment of this application;
FIG. 12 is a block diagram of the basic structure of the computer device 100 in an embodiment of this application.
【Detailed Description】
To facilitate understanding, this application is described more fully below with reference to the accompanying drawings, which show preferred embodiments. This application may, however, be implemented in many different forms and is not limited to the embodiments described herein; rather, these embodiments are provided so that the disclosure of this application will be understood thoroughly and completely.
An embodiment of this application discloses a method for forming a movement trajectory, used to form a movement trajectory of a target person.
Referring to Fig. 1, a schematic diagram of the method for forming a movement trajectory in an embodiment of this application.
As illustrated in Fig. 1, in an embodiment of this application, the method for forming a movement trajectory comprises:
S1: Acquire an appearance feature sample of the target person.
The appearance feature sample of the target person can be obtained as follows. For example, when the target person is a missing person, the person's relatives may provide photographs, videos, and the like of the target person to the police; the police turn these into an appearance feature sample and store it in a storage medium, from which the appearance feature sample of the target person is then acquired.
The appearance feature sample comprises a body shape feature sample and a facial feature sample of the target person. The body shape feature sample includes image features of gait and posture, the ratio of height to body width, and the like. The facial feature sample includes image features of facial parts such as the eyebrows, eyes, nose, and mouth, as well as the proportion features of these facial parts. To let the appearance feature sample fully reflect the target person's appearance and to improve the accuracy of identifying the target person, the appearance feature sample may include image features of gait and posture from multiple angles, height-to-width ratio features from multiple angles, facial image features from multiple angles, and facial proportion features from multiple angles.
S2: Extract an image feature set from the appearance feature sample.
The extraction of the image feature set can be combined with the image data within the image data comparison range. Specifically, the appearance feature content contained in the image data is determined, and image features containing the same appearance feature content are extracted from the appearance feature sample according to that content to compose the image feature set.
S3: Extract image data within the image data comparison range and compare it against the image feature set.
The image data comprises body shape image data and facial image data, and the image feature set comprises a body shape feature set and a facial feature set. The body shape image data and the facial image data can be obtained as follows: human body shape detection and human face detection are performed on the images collected from the road network to obtain the body shape image data and the facial image data, and a mapping table between the body shape image data and the facial image data is built along the time line.
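As a minimal sketch of the time-indexed mapping table described above (the per-frame data layout and the detector outputs here are illustrative assumptions, not specified by the application), the table could be built like this:

```python
from collections import defaultdict

def build_mapping_table(frames):
    """frames: iterable of (timestamp, body_boxes, face_boxes) tuples,
    where the detections are assumed to come from separate body-shape
    and face detectors run on the same road-network images."""
    table = defaultdict(dict)
    for timestamp, body_boxes, face_boxes in frames:
        # Index both detection kinds under the same time node so later
        # steps can look up the body/face pair belonging to one frame.
        table[timestamp]["body"] = body_boxes
        table[timestamp]["face"] = face_boxes
    return dict(sorted(table.items()))  # ordered along the time line

frames = [(1, ["b1"], ["f1"]), (0, ["b0"], [])]
print(build_mapping_table(frames))
```

The sorted return keeps the table ordered by time, which is what the later sequence-forming steps (S41, S411, S412) rely on.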
The image data comparison range is the set of body shape image data and facial image data captured by at least one selected camera device within a time period. Camera devices include, but are not limited to, road checkpoints and the cameras of urban CCTV surveillance systems.
Referring to Fig. 2, a schematic diagram of the step of extracting image data within the image data comparison range and comparing it against the image feature set in an embodiment of this application. As illustrated in Fig. 2, this step comprises:
S31: Apply a body shape image recognition neural network to compare each frame of body shape image in the body shape image data against the body shape feature set.
S32: Compute a first similarity between each frame of body shape image and the body shape feature set.
S33: Apply a facial image recognition neural network to compare each frame of facial image in the facial image data against the facial feature set.
S34: Compute a second similarity between each frame of facial image and the facial feature set.
Methods for computing the first similarity between each frame of body shape image and the body shape feature set include: computing it with image histograms, with an average hash or perceptual hash algorithm, with matrix decompositions, and with image feature points, among others. The second similarity between each frame of facial image and the facial feature set can be computed with the same methods.
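For instance, the average-hash route named above can be sketched as follows; the 8×8 grayscale input and the Hamming-distance-based score in [0, 1] are illustrative assumptions, not the application's exact formula:

```python
def average_hash(gray):
    """gray: 8x8 grayscale image flattened to a list of 64 intensities.
    Each bit records whether a pixel is brighter than the image mean."""
    mean = sum(gray) / len(gray)
    return [1 if p > mean else 0 for p in gray]

def hash_similarity(img_a, img_b):
    """Similarity in [0, 1]: one minus the normalized Hamming distance
    between the two average hashes."""
    ha, hb = average_hash(img_a), average_hash(img_b)
    distance = sum(a != b for a, b in zip(ha, hb))
    return 1 - distance / len(ha)

a = [10] * 32 + [200] * 32   # half dark, half bright
b = [12] * 32 + [190] * 32   # same pattern, different exposure
print(hash_similarity(a, b))  # identical hash pattern -> 1.0
```

Because the hash only keeps each pixel's relation to the mean, small exposure changes between the two frames do not lower the score, which is why hash-based similarity suits cross-camera comparison.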
The step of extracting image data within the image data comparison range and comparing it against the image feature set further comprises:
S35: When the first similarity between a body shape image and the body shape feature set reaches a first threshold, feed that body shape image to the body shape image recognition neural network for recognition-enhancement training.
This recognition-enhancement training further improves the accuracy with which the body shape image recognition neural network identifies the target person's body shape images.
S36: When the second similarity between a facial image and the facial feature set reaches a second threshold, feed that facial image to the facial image recognition neural network for recognition-enhancement training.
This recognition-enhancement training further improves the accuracy with which the facial image recognition neural network identifies the target person's facial images.
Both the body shape image recognition neural network and the facial image recognition neural network can be implemented as convolutional neural networks. Their training has two stages: an initial training stage and a recognition-enhancement stage. In the initial stage, manually selected body shape images of the target person are fed to the body shape network for training, and manually selected facial images of the target person are fed to the facial network. In the enhancement stage, body shape images whose first similarity reaches the first threshold are fed back to the body shape network, enlarging its training set and helping to raise its recognition accuracy; likewise, facial images whose second similarity reaches the second threshold are fed back to the facial network, enlarging its training set and helping to raise its recognition accuracy.
In some embodiments of this application, the first threshold is slightly larger than the second threshold, to raise the reliability of body shape identification of the target person.
Note that S31-S32 and S33-S34 may be executed in either order relative to each other, as may S35 and S36.
S4: When the similarity between some image data and the image feature set exceeds the threshold, determine the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range.
Referring to Fig. 3, a schematic diagram of the step of determining the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range in an embodiment of this application.
As illustrated in Fig. 3, this step comprises:
S41: Arrange the image data whose similarity exceeds the threshold into an image sequence along the time line; take the earliest time point in the image sequence as the appearance time and the latest time point as the disappearance time.
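S41 reduces to a filter plus a min/max over timestamps. A sketch, under the assumption that each match is a (timestamp, similarity) pair:

```python
def appearance_window(matches, threshold):
    """matches: iterable of (timestamp, similarity) pairs.
    Returns (appearance_time, disappearance_time) over the frames whose
    similarity exceeds the threshold, or None if no frame qualifies."""
    sequence = sorted(t for t, s in matches if s > threshold)
    if not sequence:
        return None
    # Earliest time point = appearance time, latest = disappearance time.
    return sequence[0], sequence[-1]

matches = [(5, 0.9), (2, 0.95), (7, 0.4), (9, 0.85)]
print(appearance_window(matches, 0.8))  # -> (2, 9)
```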
Referring to Fig. 4, a schematic diagram of the step of arranging the image data whose similarity exceeds the threshold into an image sequence along the time line in an embodiment of this application. As illustrated in Fig. 4, this step comprises:
S411: Arrange the body shape image data whose first similarity reaches the first threshold into a body shape image sequence along the time line.
S412: Arrange the facial image data whose second similarity reaches the second threshold into a facial image sequence along the time line.
S413: Fuse the body shape image sequence with the facial image sequence to obtain the image sequence.
Referring to Fig. 5, a schematic diagram of the step of fusing the body shape image sequence with the facial image sequence to obtain the image sequence in an embodiment of this application. As illustrated in Fig. 5, this step comprises:
S413a: Fuse each frame of body shape image in the body shape image sequence with each frame of facial image in the facial image sequence along the time line.
S413b: When the body shape image is missing at a node of the time line in the body shape image sequence, fill it in with the facial image at the same node in the facial image sequence.
S413c: When the facial image is missing at a node of the time line in the facial image sequence, fill it in with the body shape image at the same node in the body shape image sequence.
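A minimal sketch of S413a-S413c, assuming both sequences are dicts keyed by time node (a simplification; the application does not prescribe the data layout):

```python
def fuse_sequences(body_seq, face_seq):
    """Fuse time-indexed body and face sequences; at nodes where one
    modality is missing, fall back on the other (S413b / S413c)."""
    fused = {}
    for node in sorted(set(body_seq) | set(face_seq)):
        body = body_seq.get(node)
        face = face_seq.get(node)
        # Prefer the full pair; otherwise substitute the present modality.
        fused[node] = (body if body is not None else face,
                       face if face is not None else body)
    return fused

body = {1: "B1", 3: "B3"}
face = {1: "F1", 2: "F2"}
print(fuse_sequences(body, face))
# node 2 lacks a body image (filled from face); node 3 lacks a face image
```

The fallback keeps the fused sequence gap-free, so the earliest/latest time points used in S41 are drawn from the union of both modalities rather than only the frames where both detections succeeded.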
S42: Apply a road network recognition neural network to identify, in the image data corresponding to the appearance time, the first position of the target person in the road network; take the first position as the appearance position.
S43: Apply the road network recognition neural network to identify, in the image data corresponding to the disappearance time, the second position of the target person in the road network; take the second position as the disappearance position.
The road network recognition neural network first identifies, in an image, the environment the target person is in, then determines and records the person's location against the road network, and maps changes of that location into the road network to derive the person's change of position within it. The position corresponding to the appearance time is the first position, and the position corresponding to the disappearance time is the second position. Since the target person's location is reflected in the image, the person's position in the road network can be obtained indirectly by recognizing the person's location in the image. The images include, but are not limited to, the facial images and the body shape images.
Note that S42 and S43 may be executed in either order, as may S411 and S412, and S413b and S413c.
S5: Annotate the appearance position, the disappearance position, the appearance time, and the disappearance time onto the road network to form the movement trajectory of the target person.
In some embodiments of this application, the method for forming a movement trajectory further comprises:
S6: Update the image data comparison range according to the appearance position, the disappearance position, the appearance time, and the disappearance time.
Referring to Fig. 6, a schematic diagram of the step of updating the image data comparison range according to the appearance position, the disappearance position, the appearance time, and the disappearance time in an embodiment of this application.
As illustrated in Fig. 6, this step comprises:
S61: Search the road network for at least one camera device at the appearance position and at least one camera device at the disappearance position.
S62: Taking the appearance time as a first time endpoint, retrieve backward image data from the image data storage addresses of the at least one camera device at the appearance position, going backward along the time line.
S63: Taking the disappearance time as a second time endpoint, retrieve forward image data from the image data storage addresses of the at least one camera device at the disappearance position, going forward along the time line.
S64: Update the image data comparison range with the backward image data and the forward image data.
Retrieving backward image data along the reverse time line means retrieving image data at time points earlier than the first time endpoint. For example, when the appearance time is 9:00 am, 9:00 am is taken as the first time endpoint, and the backward image data at 8:59 am is retrieved from the image data storage addresses of the at least one camera device at the appearance position.
Retrieving forward image data along the time line means retrieving image data at time points later than the second time endpoint. For example, when the disappearance time is 9:05 am, 9:05 am is taken as the second time endpoint, and the forward image data at 9:06 am is retrieved from the image data storage addresses of the at least one camera device at the disappearance position.
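The window arithmetic of S62-S63 can be sketched as follows; representing times as minutes since midnight and stepping by one minute (matching the 8:59/9:06 examples above) are illustrative assumptions:

```python
def update_comparison_range(appear_time, disappear_time, step=1):
    """Return the next (backward, forward) time endpoints to fetch:
    one step earlier than the appearance time (S62) and one step later
    than the disappearance time (S63). Times in minutes since midnight."""
    return appear_time - step, disappear_time + step

# 9:00 am = 540 min, 9:05 am = 545 min
backward, forward = update_comparison_range(540, 545)
print(backward, forward)  # -> 539 546, i.e. 8:59 am and 9:06 am
```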
Referring again to Fig. 1, in some embodiments of this application, after the image data comparison range is updated according to the appearance position, the disappearance position, the appearance time, and the disappearance time, execution continues with S3, i.e., image data is extracted within the updated image data comparison range and compared against the image feature set. S3, S4, S5, and S6 execute in a loop, continually annotating the target person's new appearance positions, disappearance positions, appearance times, and disappearance times onto the road network to form the person's continuous movement trajectory.
In the embodiments of this application, the method extracts an image feature set from the acquired appearance feature sample, extracts image data within the image data comparison range and compares it against the image feature set, and thereby identifies the target person within that range. When the similarity between some image data and the image feature set exceeds the threshold, the target person's appearance position, disappearance position, appearance time, and disappearance time within the comparison range are determined, then annotated onto the road network to form the movement trajectory. By updating the image data comparison range according to those positions and times, the method continues to identify the target person within the updated range at the appearance and disappearance positions in the road network, obtaining the person's appearance and disappearance positions and times elsewhere in the road network. The method thus obtains a complete and effective movement trajectory of the target person in the road network. Applying image recognition artificial intelligence, it can generate the target person's trajectory in the road network accurately and quickly, and can be applied to finding missing persons and analyzing the travel paths of criminal suspects.
An embodiment of this application discloses an apparatus for forming a movement trajectory.
Referring to Fig. 7, an example diagram of the apparatus for forming a movement trajectory in an embodiment of this application. As illustrated in Fig. 7, the apparatus comprises:
an appearance feature sample acquisition module 10, configured to acquire the appearance feature sample of the target person;
an image feature set extraction module 20, configured to extract an image feature set from the appearance feature sample;
a comparison module 30, configured to extract image data within the image data comparison range and compare it against the image feature set;
a target person determination module 40, configured to determine, when the similarity between some image data and the image feature set exceeds the threshold, the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range;
a movement trajectory annotation module 50, configured to annotate the appearance position, the disappearance position, the appearance time, and the disappearance time onto the road network to form the movement trajectory of the target person.
Referring to Fig. 8, an example diagram of the comparison module 30 in an embodiment of this application. As illustrated in Fig. 8, in some embodiments the comparison module 30 comprises:
a body shape image recognition submodule 31, configured to apply the body shape image recognition neural network to compare each frame of body shape image in the body shape image data against the body shape feature set, and to compute the first similarity between each frame of body shape image and the body shape feature set;
a facial image recognition submodule 32, configured to apply the facial image recognition neural network to compare each frame of facial image in the facial image data against the facial feature set, and to compute the second similarity between each frame of facial image and the facial feature set.
In some embodiments, the comparison module 30 further comprises a recognition-enhancement training submodule 33, configured to feed a body shape image to the body shape image recognition neural network for recognition-enhancement training when its first similarity with the body shape feature set reaches the first threshold, and to feed a facial image to the facial image recognition neural network for recognition-enhancement training when its second similarity with the facial feature set reaches the second threshold.
Referring to Fig. 9, an example diagram of the target person determination module 40 in an embodiment of this application. As illustrated in Fig. 9, in some embodiments the target person determination module 40 comprises:
a time determination submodule 41, configured to arrange the image data whose similarity exceeds the threshold into an image sequence along the time line, taking the earliest time point in the sequence as the appearance time and the latest time point as the disappearance time;
a position determination submodule 42, configured to apply the road network recognition neural network to identify, in the image data corresponding to the appearance time, the first position of the target person in the road network, taking the first position as the appearance position.
The position determination submodule 42 is further configured to apply the road network recognition neural network to identify, in the image data corresponding to the disappearance time, the second position of the target person in the road network, taking the second position as the disappearance position.
Referring to Fig. 10, an example diagram of the time determination submodule 41 in an embodiment of this application. As illustrated in Fig. 10, in some embodiments the time determination submodule 41 comprises:
a body shape image sequence unit 411, configured to arrange the body shape image data whose first similarity reaches the first threshold into a body shape image sequence along the time line;
a facial image sequence unit 412, configured to arrange the facial image data whose second similarity reaches the second threshold into a facial image sequence along the time line;
a fusion unit 413, configured to fuse the body shape image sequence with the facial image sequence to obtain the image sequence.
In some embodiments, the fusion unit 413 fuses each frame of body shape image in the body shape image sequence with each frame of facial image in the facial image sequence along the time line; when the body shape image is missing at a node of the time line in the body shape image sequence, it fills it in with the facial image at the same node in the facial image sequence; when the facial image is missing at a node of the time line in the facial image sequence, it fills it in with the body shape image at the same node in the body shape image sequence.
In some embodiments of this application, the apparatus for forming a movement trajectory further comprises a comparison range update module 60, configured to update the image data comparison range according to the appearance position, the disappearance position, the appearance time, and the disappearance time.
Referring to Fig. 11, an example diagram of the comparison range update module 60 in an embodiment of this application. As illustrated in Fig. 11, in some embodiments the comparison range update module 60 comprises:
a camera device search submodule 61, configured to search the road network for at least one camera device at the appearance position and at least one camera device at the disappearance position;
a backward image data acquisition submodule 62, configured to take the appearance time as a first time endpoint and retrieve backward image data from the image data storage addresses of the at least one camera device at the appearance position, going backward along the time line;
a forward image data acquisition submodule 63, configured to take the disappearance time as a second time endpoint and retrieve forward image data from the image data storage addresses of the at least one camera device at the disappearance position, going forward along the time line;
an image data update submodule 64, configured to update the image data comparison range with the backward image data and the forward image data.
An embodiment of this application discloses a computer device. Referring to Fig. 12, a block diagram of the basic structure of the computer device 100 in an embodiment of this application.
As illustrated in Fig. 12, the computer device 100 comprises a memory 101, a processor 102, and a network interface 103 communicatively connected with one another via a system bus. Note that Fig. 12 shows only a computer device 100 with components 101-103; it should be understood that implementing all of the components shown is not required, and more or fewer components may be implemented instead. Those skilled in the art will understand that the computer device here is a device capable of automatically performing numerical computation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), embedded devices, and the like.
The computer device may be a desktop computer, a notebook, a palmtop computer, a cloud server, or another computing device. It may interact with a user through a keyboard, mouse, remote control, touchpad, voice control device, or other means.
The memory 101 comprises at least one type of readable storage medium, including flash memory, hard disks, multimedia cards, card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical disks, and the like. In some embodiments, the memory 101 may be an internal storage unit of the computer device 100, such as its hard disk or main memory. In other embodiments, the memory 101 may be an external storage device of the computer device 100, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the device. Of course, the memory 101 may also include both an internal storage unit of the computer device 100 and an external storage device. In this embodiment, the memory 101 is typically used to store the operating system and the various application software installed on the computer device 100, such as the computer-readable instructions of the above method for forming a movement trajectory; it may also temporarily store various data that has been output or is to be output.
The processor 102 may in some embodiments be a central processing unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip. The processor 102 is typically used to control the overall operation of the computer device 100. In this embodiment, the processor 102 runs the computer-readable instructions stored in the memory 101 or processes data, for example running the computer-readable instructions of the above method for forming a movement trajectory.
The network interface 103 may comprise a wireless or wired network interface and is typically used to establish communication connections between the computer device 100 and other electronic devices.
This application also provides another embodiment, namely one or more non-volatile readable storage media storing computer-readable instructions executable by at least one processor, so as to cause the at least one processor to perform the steps of any of the above methods for forming a movement trajectory.
Finally, it should be noted that the embodiments described above are obviously only some of the embodiments of this application, not all of them; the drawings show preferred embodiments of this application but do not limit its patent scope. This application may be implemented in many different forms; rather, these embodiments are provided so that the disclosure of this application is understood thoroughly and completely. Although this application has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions recorded in the foregoing embodiments or make equivalent substitutions for some of their technical features. Any equivalent structure made using the contents of the specification and drawings of this application, applied directly or indirectly in other related technical fields, likewise falls within the scope of patent protection of this application.

Claims (20)

  1. A method for forming a movement trajectory, used to form a movement trajectory of a target person, characterized by comprising:
    acquiring an appearance feature sample of the target person;
    extracting an image feature set from the appearance feature sample;
    extracting image data within an image data comparison range of a road network and comparing it against the image feature set;
    when the similarity between some image data and the image feature set exceeds a threshold, determining the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range;
    annotating the appearance position, the disappearance position, the appearance time, and the disappearance time onto the road network to form the movement trajectory of the target person.
  2. The method for forming a movement trajectory according to claim 1, characterized in that the image data comprises body shape image data and facial image data, the image feature set comprises a body shape feature set and a facial feature set, and the step of extracting image data within the image data comparison range and comparing it against the image feature set comprises:
    applying a body shape image recognition neural network to compare each frame of body shape image in the body shape image data against the body shape feature set; computing a first similarity between each frame of body shape image and the body shape feature set;
    applying a facial image recognition neural network to compare each frame of facial image in the facial image data against the facial feature set; computing a second similarity between each frame of facial image and the facial feature set.
  3. The method for forming a movement trajectory according to claim 2, characterized in that the step of extracting image data within the image data comparison range and comparing it against the image feature set further comprises:
    when the first similarity between a body shape image and the body shape feature set reaches a first threshold, feeding the body shape image to the body shape image recognition neural network for recognition-enhancement training;
    when the second similarity between a facial image and the facial feature set reaches a second threshold, feeding the facial image to the facial image recognition neural network for recognition-enhancement training.
  4. The method for forming a movement trajectory according to claim 1, characterized in that the step of determining the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range comprises:
    arranging the image data whose similarity exceeds the threshold into an image sequence along the time line; taking the earliest time point in the image sequence as the appearance time and the latest time point as the disappearance time;
    applying a road network recognition neural network to identify, in the image data corresponding to the appearance time, a first position of the target person in the road network; taking the first position as the appearance position;
    applying the road network recognition neural network to identify, in the image data corresponding to the disappearance time, a second position of the target person in the road network; taking the second position as the disappearance position.
  5. The method for forming a movement trajectory according to claim 4, characterized in that the image data comprises body shape image data and facial image data; the similarity comprises a first similarity and a second similarity; the threshold comprises a first threshold and a second threshold; and the step of arranging the image data whose similarity exceeds the threshold into an image sequence along the time line comprises:
    arranging the body shape image data whose first similarity reaches the first threshold into a body shape image sequence along the time line;
    arranging the facial image data whose second similarity reaches the second threshold into a facial image sequence along the time line;
    fusing the body shape image sequence with the facial image sequence to obtain the image sequence.
  6. The method for forming a movement trajectory according to claim 5, characterized in that the step of fusing the body shape image sequence with the facial image sequence to obtain the image sequence comprises:
    fusing each frame of body shape image in the body shape image sequence with each frame of facial image in the facial image sequence along the time line;
    when the body shape image is missing at a node of the time line in the body shape image sequence, filling it in with the facial image at the same node in the facial image sequence;
    when the facial image is missing at a node of the time line in the facial image sequence, filling it in with the body shape image at the same node in the body shape image sequence.
  7. The method for forming a movement trajectory according to claim 1, characterized in that, after the step of determining the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range, the method further comprises the step of:
    updating the image data comparison range according to the appearance position, the disappearance position, the appearance time, and the disappearance time:
    searching the road network for at least one camera device at the appearance position and at least one camera device at the disappearance position;
    taking the appearance time as a first time endpoint, retrieving backward image data from the image data storage addresses of the at least one camera device at the appearance position, going backward along the time line;
    taking the disappearance time as a second time endpoint, retrieving forward image data from the image data storage addresses of the at least one camera device at the disappearance position, going forward along the time line;
    updating the image data comparison range with the backward image data and the forward image data.
  8. An apparatus for forming a movement trajectory, used to form a movement trajectory of a target person, characterized by comprising:
    an appearance feature sample acquisition module, configured to acquire an appearance feature sample of the target person;
    an image feature set extraction module, configured to extract an image feature set from the appearance feature sample;
    a comparison module, configured to extract image data within an image data comparison range of a road network and compare it against the image feature set;
    a target person determination module, configured to determine, when the similarity between some image data and the image feature set exceeds a threshold, the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range;
    a movement trajectory annotation module, configured to annotate the appearance position, the disappearance position, the appearance time, and the disappearance time onto the road network to form the movement trajectory of the target person.
  9. A computer device comprising a memory and a processor, characterized in that the memory stores computer-readable instructions, and the processor, when executing the computer-readable instructions, implements the following steps:
    acquiring an appearance feature sample of the target person;
    extracting an image feature set from the appearance feature sample;
    extracting image data within an image data comparison range of a road network and comparing it against the image feature set;
    when the similarity between some image data and the image feature set exceeds a threshold, determining the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range;
    annotating the appearance position, the disappearance position, the appearance time, and the disappearance time onto the road network to form the movement trajectory of the target person.
  10. The computer device according to claim 9, characterized in that the image data comprises body shape image data and facial image data, the image feature set comprises a body shape feature set and a facial feature set, and the step of extracting image data within the image data comparison range and comparing it against the image feature set comprises:
    applying a body shape image recognition neural network to compare each frame of body shape image in the body shape image data against the body shape feature set; computing a first similarity between each frame of body shape image and the body shape feature set;
    applying a facial image recognition neural network to compare each frame of facial image in the facial image data against the facial feature set; computing a second similarity between each frame of facial image and the facial feature set.
  11. The computer device according to claim 10, characterized in that the step of extracting image data within the image data comparison range and comparing it against the image feature set further comprises:
    when the first similarity between a body shape image and the body shape feature set reaches a first threshold, feeding the body shape image to the body shape image recognition neural network for recognition-enhancement training;
    when the second similarity between a facial image and the facial feature set reaches a second threshold, feeding the facial image to the facial image recognition neural network for recognition-enhancement training.
  12. The computer device according to claim 9, characterized in that the step of determining the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range comprises:
    arranging the image data whose similarity exceeds the threshold into an image sequence along the time line; taking the earliest time point in the image sequence as the appearance time and the latest time point as the disappearance time;
    applying a road network recognition neural network to identify, in the image data corresponding to the appearance time, a first position of the target person in the road network; taking the first position as the appearance position;
    applying the road network recognition neural network to identify, in the image data corresponding to the disappearance time, a second position of the target person in the road network; taking the second position as the disappearance position.
  13. The computer device according to claim 12, characterized in that the image data comprises body shape image data and facial image data; the similarity comprises a first similarity and a second similarity; the threshold comprises a first threshold and a second threshold; and the step of arranging the image data whose similarity exceeds the threshold into an image sequence along the time line comprises:
    arranging the body shape image data whose first similarity reaches the first threshold into a body shape image sequence along the time line;
    arranging the facial image data whose second similarity reaches the second threshold into a facial image sequence along the time line;
    fusing the body shape image sequence with the facial image sequence to obtain the image sequence.
  14. The computer device according to claim 13, characterized in that the step of fusing the body shape image sequence with the facial image sequence to obtain the image sequence comprises:
    fusing each frame of body shape image in the body shape image sequence with each frame of facial image in the facial image sequence along the time line;
    when the body shape image is missing at a node of the time line in the body shape image sequence, filling it in with the facial image at the same node in the facial image sequence;
    when the facial image is missing at a node of the time line in the facial image sequence, filling it in with the body shape image at the same node in the body shape image sequence.
  15. One or more non-volatile readable storage media, characterized in that the non-volatile readable storage media store computer-readable instructions which, when executed by a processor, cause the processor to perform the following steps:
    acquiring an appearance feature sample of the target person;
    extracting an image feature set from the appearance feature sample;
    extracting image data within an image data comparison range of a road network and comparing it against the image feature set;
    when the similarity between some image data and the image feature set exceeds a threshold, determining the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range;
    annotating the appearance position, the disappearance position, the appearance time, and the disappearance time onto the road network to form the movement trajectory of the target person.
  16. The non-volatile readable storage media according to claim 15, characterized in that the image data comprises body shape image data and facial image data, the image feature set comprises a body shape feature set and a facial feature set, and the step of extracting image data within the image data comparison range and comparing it against the image feature set comprises:
    applying a body shape image recognition neural network to compare each frame of body shape image in the body shape image data against the body shape feature set; computing a first similarity between each frame of body shape image and the body shape feature set;
    applying a facial image recognition neural network to compare each frame of facial image in the facial image data against the facial feature set; computing a second similarity between each frame of facial image and the facial feature set.
  17. The non-volatile readable storage media according to claim 16, characterized in that the step of extracting image data within the image data comparison range and comparing it against the image feature set further comprises:
    when the first similarity between a body shape image and the body shape feature set reaches a first threshold, feeding the body shape image to the body shape image recognition neural network for recognition-enhancement training;
    when the second similarity between a facial image and the facial feature set reaches a second threshold, feeding the facial image to the facial image recognition neural network for recognition-enhancement training.
  18. The non-volatile readable storage media according to claim 15, characterized in that the step of determining the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range comprises:
    arranging the image data whose similarity exceeds the threshold into an image sequence along the time line; taking the earliest time point in the image sequence as the appearance time and the latest time point as the disappearance time;
    applying a road network recognition neural network to identify, in the image data corresponding to the appearance time, a first position of the target person in the road network; taking the first position as the appearance position;
    applying the road network recognition neural network to identify, in the image data corresponding to the disappearance time, a second position of the target person in the road network; taking the second position as the disappearance position.
  19. The non-volatile readable storage media according to claim 18, characterized in that the image data comprises body shape image data and facial image data; the similarity comprises a first similarity and a second similarity; the threshold comprises a first threshold and a second threshold; and the step of arranging the image data whose similarity exceeds the threshold into an image sequence along the time line comprises:
    arranging the body shape image data whose first similarity reaches the first threshold into a body shape image sequence along the time line;
    arranging the facial image data whose second similarity reaches the second threshold into a facial image sequence along the time line;
    fusing the body shape image sequence with the facial image sequence to obtain the image sequence.
  20. The non-volatile readable storage media according to claim 19, characterized in that the step of fusing the body shape image sequence with the facial image sequence to obtain the image sequence comprises:
    fusing each frame of body shape image in the body shape image sequence with each frame of facial image in the facial image sequence along the time line;
    when the body shape image is missing at a node of the time line in the body shape image sequence, filling it in with the facial image at the same node in the facial image sequence;
    when the facial image is missing at a node of the time line in the facial image sequence, filling it in with the body shape image at the same node in the body shape image sequence.
PCT/CN2019/103180 2019-05-07 2019-08-29 Method, apparatus, computer device, and storage medium for forming a movement trajectory WO2020224116A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910377560.1A CN110276244B (zh) 2019-05-07 2019-05-07 Method, apparatus, computer device, and storage medium for forming a movement trajectory
CN201910377560.1 2019-05-07

Publications (1)

Publication Number Publication Date
WO2020224116A1 true WO2020224116A1 (zh) 2020-11-12

Family

ID=67959802

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/103180 WO2020224116A1 (zh) 2019-05-07 2019-08-29 Method, apparatus, computer device, and storage medium for forming a movement trajectory

Country Status (2)

Country Link
CN (1) CN110276244B (zh)
WO (1) WO2020224116A1 (zh)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107644204A * 2017-09-12 2018-01-30 南京凌深信息科技有限公司 Human body recognition and tracking method for a security system
CN107909025A * 2017-11-13 2018-04-13 毛国强 Person recognition and tracking method and system based on video and wireless monitoring
CN109194619A * 2018-08-06 2019-01-11 湖南深纳数据有限公司 A big data service system applied to intelligent security

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101163940B * 2005-04-25 2013-07-24 株式会社吉奥技术研究所 Imaging position analysis method
CN107423674A * 2017-05-15 2017-12-01 广东数相智能科技有限公司 A face-recognition-based person search method, electronic device, and storage medium
CN108288025A * 2017-12-22 2018-07-17 深圳云天励飞技术有限公司 A vehicle-mounted video surveillance method, apparatus, and device


Also Published As

Publication number Publication date
CN110276244B (zh) 2024-04-09
CN110276244A (zh) 2019-09-24

Similar Documents

Publication Publication Date Title
US11455735B2 (en) Target tracking method, device, system and non-transitory computer readable storage medium
WO2020042419A1 (zh) Gait-based identity recognition method and apparatus, and electronic device
WO2020038136A1 (zh) Facial recognition method and apparatus, electronic device, and computer-readable medium
CN108446585B (zh) Target tracking method and apparatus, computer device, and storage medium
WO2021103721A1 (zh) Component-segmentation-based recognition model training and vehicle re-identification method and apparatus
Zeng et al. Silhouette-based gait recognition via deterministic learning
CN109145742B (zh) 一种行人识别方法及系统
US10762644B1 (en) Multiple object tracking in video by combining neural networks within a bayesian framework
CN109657533A (zh) Pedestrian re-identification method and related products
CN110705478A (zh) Face tracking method, apparatus, device, and storage medium
US9911053B2 (en) Information processing apparatus, method for tracking object and program storage medium
KR102132722B1 (ko) Method and system for tracking multiple objects in video
CN109492576B (zh) Image recognition method and apparatus, and electronic device
CN110009060B (zh) A robust long-term tracking method based on correlation filtering and object detection
CN108898623A (zh) Target tracking method and device
CN113608663B (zh) A fingertip tracking method based on deep learning and the k-curvature method
JP2022082493A (ja) Pedestrian re-identification method with random-occlusion recovery based on a noise channel
Bhuyan et al. Trajectory guided recognition of hand gestures having only global motions
CN112541403A (zh) An indoor person fall detection method using an infrared camera
CN112329602A (zh) Method and apparatus for acquiring face-annotated images, electronic device, and storage medium
CN109858464B (zh) Gallery data processing method, face recognition method, apparatus, and electronic device
CN114998628A (zh) Siamese-network long-term target tracking method based on template matching
CN113989929A (zh) Human action recognition method and apparatus, electronic device, and computer-readable medium
CN113989914B (zh) A security monitoring method and system based on face recognition
WO2020224116A1 (zh) Method, apparatus, computer device, and storage medium for forming a movement trajectory

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19927646

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19927646

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 01.06.2022)

122 Ep: pct application non-entry in european phase

Ref document number: 19927646

Country of ref document: EP

Kind code of ref document: A1