WO2020224116A1 - Method, apparatus, computer device and storage medium for forming a movement track - Google Patents
Method, apparatus, computer device and storage medium for forming a movement track
- Publication number
- WO2020224116A1 (PCT/CN2019/103180)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- image data
- time
- body shape
- feature set
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
Definitions
- This application belongs to the field of artificial intelligence technology and relates to a method, an apparatus, a computer device, and a storage medium for forming a movement track.
- The urban closed-circuit television (CCTV) monitoring system and road checkpoints play a substantial role in reducing urban crime, increasing the crime detection rate, and finding missing persons.
- Forming the movement track of a target person through the urban CCTV monitoring system and road checkpoints is generally done manually, for example by manually identifying the target person and drawing the target person's movement track.
- The embodiments of the present application disclose a method, an apparatus, a computer device, and a storage medium for forming a movement track, aiming to form the movement track of a target person accurately and quickly.
- Some embodiments of the present application disclose a method for forming a movement track, including: acquiring a profile feature sample of the target person; extracting an image feature set from the profile feature sample; extracting image data within an image data comparison range of the road network and comparing it with the image feature set; when the similarity between the image data and the image feature set exceeds a threshold, determining the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range; and marking the appearance position, the disappearance position, the appearance time, and the disappearance time into the road network to form the movement track of the target person.
- An embodiment of the present application discloses a device for forming a movement track, including:
- a profile feature sample acquisition module, used to acquire a profile feature sample of the target person; an image feature set extraction module, used to extract an image feature set from the profile feature sample; a comparison module, used to extract image data within an image data comparison range of the road network and compare it with the image feature set; a target person determination module, used to determine, when the similarity between the image data and the image feature set exceeds a threshold, the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range; and a movement track marking module, used to mark the appearance position, the disappearance position, the appearance time, and the disappearance time into the road network to form the movement track of the target person.
- An embodiment of the present application discloses a computer device including a memory and a processor.
- the memory stores computer-readable instructions.
- When the processor executes the computer-readable instructions, the steps of the method for forming a movement track are implemented.
- An embodiment of the present application discloses one or more non-volatile readable storage media storing computer-readable instructions which, when executed by a processor, cause the processor to execute the steps of the method for forming a movement track.
- FIG. 1 is a schematic diagram of a method for forming a movement track according to an embodiment of the application
- FIG. 2 is a schematic diagram of the steps of extracting image data within the image data comparison range and comparing it with the image feature set in an embodiment of the application;
- FIG. 3 is a schematic diagram of the steps of determining the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range in an embodiment of the application;
- FIG. 4 is a schematic diagram of the steps of forming the image data whose similarity exceeds the threshold value into an image sequence according to a time course in an embodiment of the application;
- FIG. 5 is a schematic diagram of the steps of fusing the body shape image sequence and the facial image sequence to obtain the image sequence in an embodiment of the application;
- FIG. 6 is a schematic diagram of the steps of updating the comparison range of the image data according to the appearance position, the disappearance position, the appearance time, and the disappearance time in an embodiment of the application;
- FIG. 7 is an example diagram of the device for forming a movement track in an embodiment of the application.
- FIG. 8 is an example diagram of the comparison module 30 in an embodiment of the application.
- FIG. 9 is an example diagram of the target person determining module 40 in an embodiment of this application.
- FIG. 10 is an example diagram of the time determination sub-module 41 in an embodiment of this application.
- FIG. 11 is an example diagram of the comparison range update module 60 in an embodiment of this application.
- FIG. 12 is a block diagram of the basic structure of the computer device 100 in an embodiment of the application.
- An embodiment of the present application discloses a method for forming a movement track, which is used to form a movement track of a target person.
- FIG. 1 is a schematic diagram of the method for forming a movement track described in an embodiment of the application.
- the method for forming a movement track includes:
- The profile feature sample of the target person can be obtained in the following manner: for example, when the target person is a missing person, the relatives of the target person can provide the police with photos, videos, etc. of the target person, and the police can then make these photos and videos into profile feature samples.
- The profile feature samples are stored in a storage medium, and the profile feature sample of the target person is obtained from the storage medium.
- The profile feature samples include body shape feature samples and facial feature samples of the target person.
- the body shape feature samples include: image features of walking posture, ratio features of height to body width, and the like.
- The facial feature samples include facial image features such as eyebrows, eyes, nose, and mouth, and the proportional features of the facial components.
- Further, the profile feature sample may include: image features of the walking posture at multiple angles, ratio features of height to body width at multiple angles, image features of the face at multiple angles, and proportional features of the facial components at multiple angles.
- The extraction of the image feature set can be performed in conjunction with the image data in the image data comparison range. Specifically, the profile feature content contained in the image data is determined, and image features containing the same profile feature content are extracted from the profile feature sample according to that content to form the image feature set.
- the image data includes body shape image data and face image data
- the image feature set includes a body shape feature set and a face feature set.
- The body shape image data and the face image data may be obtained in the following manner: performing body shape detection and face detection on images collected from the road network to obtain the body shape image data and the face image data, and establishing a mapping table between the body shape image data and the face image data according to the time course.
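The mapping table described above can be sketched as a simple index keyed by camera and time node. The record fields (`timestamp`, `camera_id`, `bbox`) are illustrative assumptions, not terms from the application:

```python
from collections import defaultdict

def build_mapping_table(body_detections, face_detections):
    """Index body shape and face detections by (camera_id, timestamp) so the
    two modalities captured at the same moment can be looked up together."""
    table = defaultdict(lambda: {"body": [], "face": []})
    for det in body_detections:
        table[(det["camera_id"], det["timestamp"])]["body"].append(det)
    for det in face_detections:
        table[(det["camera_id"], det["timestamp"])]["face"].append(det)
    return table

# Hypothetical detections: two body shape boxes and one face box.
body_detections = [
    {"timestamp": 100, "camera_id": "cam_1", "bbox": (10, 20, 80, 200)},
    {"timestamp": 101, "camera_id": "cam_1", "bbox": (12, 21, 80, 200)},
]
face_detections = [
    {"timestamp": 100, "camera_id": "cam_1", "bbox": (30, 25, 40, 40)},
]
table = build_mapping_table(body_detections, face_detections)
print(len(table[("cam_1", 100)]["face"]))  # 1
```

With this table, a frame that yields only one modality (for example a face turned away from the camera) still produces an entry at its time node, which the later fusion step can fill from the other modality.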
- The image data comparison range is the collection of body shape image data and facial image data of at least one selected camera device over a period of time.
- The camera device includes, but is not limited to, a road checkpoint camera, a camera in an urban closed-circuit television monitoring system, and the like.
- FIG. 2 is a schematic diagram of the steps of extracting image data within the image data comparison range and comparing it with the image feature set in an embodiment of this application.
- the step of comparing the extracted image data with the image feature set in the image data comparison range includes:
- S31 Use a body shape image recognition neural network to compare each frame of body shape images in the body shape image data with the body shape feature set.
- S32 Calculate the first similarity between the body shape image of each frame and the body shape feature set.
- S33 Apply a facial image recognition neural network to compare each frame of facial image in the facial image data with the facial feature set.
- S34 Calculate a second degree of similarity between the facial image of each frame and the facial feature set.
- The method for calculating the first similarity between each frame of body shape image and the body shape feature set includes: calculating the first similarity using an image histogram, using an average hash algorithm or a perceptual hash algorithm, based on mathematical matrix decomposition, based on image feature points, and the like.
- The second similarity between each frame of facial image and the facial feature set can also be calculated by the above methods.
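As one of the listed options, the average hash approach can be sketched as follows; this is a minimal illustration written against plain nested lists of grayscale values so it runs without any imaging library, whereas a real system would hash decoded video frames:

```python
def average_hash(gray, hash_size=8):
    """Downsample the grayscale image by block averaging, then threshold each
    cell at the global mean: each bit records whether a cell is brighter
    than the image's average brightness."""
    h = len(gray) // hash_size
    w = len(gray[0]) // hash_size
    cells = []
    for i in range(hash_size):
        for j in range(hash_size):
            block = [gray[i * h + y][j * w + x] for y in range(h) for x in range(w)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return [1 if c > mean else 0 for c in cells]

def hash_similarity(h1, h2):
    """Similarity as the fraction of matching bits (1 - normalized Hamming distance)."""
    return sum(a == b for a, b in zip(h1, h2)) / len(h1)

# Two identical 16x16 horizontal gradients hash to the same bit pattern.
frame = [[x for x in range(16)] for _ in range(16)]
same = [[x for x in range(16)] for _ in range(16)]
print(hash_similarity(average_hash(frame), average_hash(same)))  # 1.0
```

The similarity value can then be compared against the first or second threshold described below.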
- the step of extracting image data within the image data comparison range and comparing with the image feature set further includes:
- The image recognition enhancement training can further improve the recognition accuracy of the body shape image recognition neural network on the body shape image of the target person, and the recognition accuracy of the facial image recognition neural network on the facial image of the target person.
- the body image recognition neural network and the facial image recognition neural network can be realized by using a convolutional neural network.
- The training process of the body shape image recognition neural network and the facial image recognition neural network includes two stages: an initial training stage and an image recognition enhancement training stage. In the initial training stage, the body shape image of the target person is manually selected and input into the body shape image recognition neural network for training, and the facial image of the target person is manually selected and input into the facial image recognition neural network for training.
- In the image recognition enhancement training stage, the body shape images whose first similarity reaches the first threshold are provided to the body shape image recognition neural network for image recognition enhancement training, which expands the training samples of the body shape image recognition neural network and helps to improve its recognition accuracy; the facial images whose second similarity reaches the second threshold are provided to the facial image recognition neural network for image recognition enhancement training, which expands the training samples of the facial image recognition neural network and helps to improve its recognition accuracy.
- The first threshold is set slightly higher than the second threshold, so as to improve the reliability of the body shape recognition of the target person.
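The threshold gate that routes high-similarity frames back into the two training pools can be sketched as below. The numeric thresholds are illustrative assumptions; the application only states that the first (body shape) threshold is slightly higher than the second (facial) one:

```python
FIRST_THRESHOLD = 0.92   # body shape (assumed value)
SECOND_THRESHOLD = 0.90  # face (assumed value)

def collect_enhancement_samples(frames):
    """Split frames into enhancement-training pools for the two networks:
    a frame joins a pool only when its similarity reaches that network's threshold."""
    body_pool, face_pool = [], []
    for frame in frames:
        if frame.get("body_similarity", 0.0) >= FIRST_THRESHOLD:
            body_pool.append(frame)
        if frame.get("face_similarity", 0.0) >= SECOND_THRESHOLD:
            face_pool.append(frame)
    return body_pool, face_pool

frames = [
    {"body_similarity": 0.95, "face_similarity": 0.80},  # body match only
    {"body_similarity": 0.91, "face_similarity": 0.93},  # face match only
]
body_pool, face_pool = collect_enhancement_samples(frames)
print(len(body_pool), len(face_pool))  # 1 1
```

Because the two thresholds are checked independently, a single frame can feed both pools, one pool, or neither.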
- Referring to FIG. 3, it is a schematic diagram of the steps for determining the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range in an embodiment of the application.
- the step of determining the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range includes:
- S41 Form the image data whose similarity exceeds the threshold into an image sequence according to a time course; the earliest time point in the image sequence is the appearance time, and the latest time point in the image sequence is the disappearance time.
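Step S41 reduces to sorting the above-threshold matches by timestamp and reading off the two endpoints. A minimal sketch, with timestamps as plain integers for illustration:

```python
def appearance_and_disappearance(matches):
    """Order matched frames by timestamp; the earliest is the appearance
    time and the latest is the disappearance time."""
    sequence = sorted(matches, key=lambda m: m["timestamp"])
    return sequence[0]["timestamp"], sequence[-1]["timestamp"]

matches = [{"timestamp": 905}, {"timestamp": 900}, {"timestamp": 903}]
print(appearance_and_disappearance(matches))  # (900, 905)
```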
- FIG. 4 is a schematic diagram of the steps of forming the image data whose similarity exceeds the threshold into an image sequence according to a time course in an embodiment of the application.
- the step of forming the image data whose similarity degree exceeds the threshold value into an image sequence according to a time course includes:
- S411 Form the body shape image data whose first similarity degree reaches the first threshold value into a body shape image sequence according to a time course.
- S412 Form the facial image data with the second similarity reaching the second threshold into a facial image sequence according to a time course.
- S413 Fuse the body shape image sequence and the face image sequence to obtain the image sequence.
- the step of fusing the body shape image sequence and the facial image sequence to obtain the image sequence includes:
- S413a Fuse each frame of the body shape image in the body shape image sequence with each frame of the face image in the face image sequence according to the time progress.
- S413b When a node of the time course in the body shape image sequence is missing the body shape image, supplement it with the facial image of the same node in the facial image sequence.
- S413c When a node of the time course in the facial image sequence is missing the facial image, supplement it with the body shape image of the same node in the body shape image sequence.
- S42 Apply a road network recognition neural network to recognize the first position of the target person in the road network in the image data corresponding to the appearance time; take the first position as the appearance position.
- The road network recognition neural network first recognizes the environment where the target person is located in the image, then judges and records the location of the target person in combination with the road network, and maps the changes of the target person's position into the road network to obtain the position change of the target person in the road network.
- The position corresponding to the appearance time is the first position, and the position corresponding to the disappearance time is the second position.
- the location of the target person can be reflected in the image, so the location of the target person in the road network can be obtained indirectly by identifying the location of the target person in the image.
- the image includes but is not limited to the face image and the body shape image.
- S43 Apply the road network recognition neural network to recognize the second position of the target person in the road network in the image data corresponding to the disappearance time; take the second position as the disappearance position.
- The execution order of S42 and S43 is not fixed, the execution order of S411 and S412 is not fixed, and the execution order of S413b and S413c is not fixed.
- S5 Mark the appearance position, the disappearance position, the appearance time, and the disappearance time into a road network to form a movement track of the target person.
- the method for forming a movement track further includes the steps:
- S6 Update the image data comparison range according to the appearance position, the disappearance position, the appearance time, and the disappearance time.
- the step of updating the comparison range of the image data according to the appearance position, the disappearance position, the appearance time, and the disappearance time includes:
- S61 Search for the at least one camera device in the appearing position and the at least one camera device in the disappearing position in combination with the road network.
- S62 Use the appearance time as the first time endpoint to obtain reverse image data from the image data storage address of the at least one camera device in the appearance position in a reverse time process.
- S63 Use the disappearance time as the second time endpoint to acquire forward image data in a forward time course from the image data storage address of the at least one camera device at the disappearance position.
- Acquiring reverse image data in a reverse time course refers to acquiring the reverse image data at a time point earlier than the first time endpoint. For example, when the appearance time is 9:00 am, take 9:00 am as the first time endpoint and obtain the reverse image data of 8:59 am from the image data storage address of the at least one camera device at the appearance position.
- Acquiring forward image data in a forward time course refers to acquiring the forward image data at a time point later than the second time endpoint. For example, when the disappearance time is 9:05 am, take 9:05 am as the second time endpoint and obtain the forward image data of 9:06 am from the image data storage address of the at least one camera device at the disappearance position.
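The two time endpoints bound the next acquisition windows, which can be sketched as below. Times are minutes since midnight for illustration, and `span` is an assumed look-back/look-ahead length not specified by the application:

```python
def query_windows(appearance_time, disappearance_time, span):
    """Derive the next query windows: a reverse window ending at the first
    time endpoint and a forward window starting at the second time endpoint."""
    reverse_window = (appearance_time - span, appearance_time)        # earlier than endpoint 1
    forward_window = (disappearance_time, disappearance_time + span)  # later than endpoint 2
    return reverse_window, forward_window

# 9:00 am = 540, 9:05 am = 545; a one-minute span reproduces the
# 8:59 am and 9:06 am examples above.
print(query_windows(540, 545, 1))  # ((539, 540), (545, 546))
```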
- After the image data comparison range is updated according to the appearance position, the disappearance position, the appearance time, and the disappearance time, S3 continues to be executed; that is, the image data extracted within the updated image data comparison range is compared with the image feature set.
- S3, S4, S5, and S6 are executed cyclically, and the new appearance position, disappearance position, appearance time, and disappearance time of the target person are continuously marked into the road network to form a continuous movement track of the target person.
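The cyclic S3-S6 execution can be sketched as a skeleton loop. Here `identify` and `update_range` stand in for the comparison/determination steps and the range update and are assumptions; `max_rounds` is an assumed guard against an endless search:

```python
def form_trajectory(initial_range, identify, update_range, max_rounds=100):
    """Repeat: identify the target in the current comparison range (S3 + S4),
    record the segment (S5), then update the comparison range from the
    appearance and disappearance positions (S6)."""
    trajectory = []
    comparison_range = initial_range
    for _ in range(max_rounds):
        segment = identify(comparison_range)      # S3 + S4
        if segment is None:                       # target no longer found
            break
        trajectory.append(segment)                # S5: mark into the road network
        comparison_range = update_range(segment)  # S6
    return trajectory

# Toy run: two recorded segments, then the target is lost.
segments = iter([{"pos": "A"}, {"pos": "B"}])
track = form_trajectory("range0", lambda r: next(segments, None), lambda s: s)
print([s["pos"] for s in track])  # ['A', 'B']
```

The loop terminates either when the target is no longer found in the updated range or when the round limit is reached, yielding the accumulated track segments.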
- The method for forming a movement track extracts an image feature set from the acquired profile feature samples, compares the image data with the image feature set within the image data comparison range, and thereby identifies the target person within the image data comparison range.
- When the similarity between the image data and the image feature set exceeds a threshold, the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range are determined. Then, the appearance position, the disappearance position, the appearance time, and the disappearance time are marked on the road network to form the movement track of the target person.
- The method for forming a movement track updates the image data comparison range according to the appearance position, the disappearance position, the appearance time, and the disappearance time, then continues to identify the target person within the updated image data comparison range from the appearance position and the disappearance position in the road network, and obtains the appearance position, disappearance position, appearance time, and disappearance time of the target person in the rest of the road network. Finally, the method can obtain a complete and effective movement track of the target person in the road network.
- The method for forming a movement track applies the artificial intelligence technology of image recognition, can accurately and quickly generate the movement track of the target person in the road network, and can be applied to searching for missing persons and analyzing the travel paths of criminals.
- An embodiment of the application discloses a device for forming a movement track.
- FIG. 7 is an example diagram of the device for forming a moving track in an embodiment of the application.
- the device for forming a movement track includes:
- the profile feature sample acquisition module 10 is used to acquire profile feature samples of the target person.
- the image feature set extraction module 20 is used to extract an image feature set from the profile feature samples.
- the comparison module 30 is used for comparing the extracted image data with the image feature set within the image data comparison range.
- The target person determination module 40 is configured to determine, when the similarity between the image data and the image feature set exceeds a threshold, the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range.
- The movement track marking module 50 is configured to mark the appearance position, the disappearance position, the appearance time, and the disappearance time into a road network to form the movement track of the target person.
- the comparison module 30 includes:
- the body shape image recognition sub-module 31 is configured to apply a body shape image recognition neural network to compare each frame of body shape image in the body shape image data with the body shape feature set, and calculate the body shape image of each frame and the body shape feature The first similarity of the set.
- The facial image recognition sub-module 32 is used to apply a facial image recognition neural network to compare each frame of facial image in the facial image data with the facial feature set, and to calculate the second similarity between each frame of facial image and the facial feature set.
- The comparison module 30 further includes an image recognition enhancement training sub-module 33, which is used to provide the body shape image to the body shape image recognition neural network for image recognition enhancement training when the first similarity between the body shape image and the body shape feature set reaches a first threshold, and to provide the facial image to the facial image recognition neural network for image recognition enhancement training when the second similarity between the facial image and the facial feature set reaches a second threshold.
- the target person determining module 40 includes:
- The time determination sub-module 41 is configured to form the image data whose similarity exceeds the threshold into an image sequence according to the time course, taking the earliest time point in the image sequence as the appearance time and the latest time point in the image sequence as the disappearance time.
- The position determination sub-module 42 is configured to apply a road network recognition neural network to recognize the first position of the target person in the road network in the image data corresponding to the appearance time, and to take the first position as the appearance position.
- The position determination sub-module 42 is further configured to apply the road network recognition neural network to recognize the second position of the target person in the road network in the image data corresponding to the disappearance time, and to take the second position as the disappearance position.
- the time determination submodule 41 includes:
- The body shape image sequence unit 411 is configured to form the body shape image data whose first similarity reaches the first threshold into a body shape image sequence according to the time course.
- The facial image sequence unit 412 is configured to form the facial image data whose second similarity reaches the second threshold into a facial image sequence according to the time course.
- the fusion unit 413 is configured to fuse the body shape image sequence and the face image sequence to obtain the image sequence.
- The fusion unit 413 fuses each frame of body shape image in the body shape image sequence with each frame of facial image in the facial image sequence according to the time course; when a node of the time course in the body shape image sequence is missing the body shape image, it is supplemented with the facial image of the same node in the facial image sequence; when a node of the time course in the facial image sequence is missing the facial image, it is supplemented with the body shape image of the same node in the body shape image sequence.
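The fusion unit's fallback rule can be sketched with sequences as dicts keyed by time-course node; a missing modality at a node is filled from the other modality at the same node. The image values are placeholder strings, an assumption for illustration:

```python
def fuse_sequences(body_seq, face_seq):
    """Merge per-node body shape and face images; when one modality is
    missing at a node, supplement it from the other modality at that node."""
    fused = {}
    for node in sorted(set(body_seq) | set(face_seq)):
        fused[node] = {
            "body": body_seq.get(node) or face_seq.get(node),
            "face": face_seq.get(node) or body_seq.get(node),
        }
    return fused

body_seq = {1: "body_1", 3: "body_3"}            # no body image at node 2
face_seq = {1: "face_1", 2: "face_2", 3: "face_3"}
fused = fuse_sequences(body_seq, face_seq)
print(fused[2])  # {'body': 'face_2', 'face': 'face_2'}
```

Node 2 has no body shape image, so the face image of the same node stands in for it, keeping the fused sequence gap-free across the time course.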
- The device for forming a movement track further includes a comparison range update module 60, configured to update the image data comparison range according to the appearance position, the disappearance position, the appearance time, and the disappearance time.
- the comparison range update module 60 includes:
- The camera device search sub-module 61 is configured to search, in combination with the road network, for the at least one camera device at the appearance position and the at least one camera device at the disappearance position.
- The reverse image data acquisition sub-module 62 is configured to take the appearance time as the first time endpoint and acquire reverse image data in a reverse time course from the image data storage address of the at least one camera device at the appearance position.
- The forward image data acquisition sub-module 63 is configured to take the disappearance time as the second time endpoint and acquire forward image data in a forward time course from the image data storage address of the at least one camera device at the disappearance position.
- The image data update sub-module 64 is configured to update the image data comparison range with the reverse image data and the forward image data.
- FIG. 12 is a block diagram of the basic structure of the computer device 100 in an embodiment of the application.
- the computer device 100 includes a memory 101, a processor 102, and a network interface 103 that are communicatively connected to each other through a system bus. It should be pointed out that FIG. 12 only shows the computer device 100 with the components 101-103, but it should be understood that it is not required to implement all the illustrated components, and more or fewer components may be implemented instead.
- The computer device here is a device that can automatically perform numerical calculation and/or information processing in accordance with preset or stored instructions. Its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like.
- the computer device may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
- the computer device can interact with the user through a keyboard, a mouse, a remote control, a touch panel, or a voice control device.
- the memory 101 includes at least one type of readable storage medium.
- The readable storage medium includes flash memory, a hard disk, a multimedia card, a card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical disks, and the like.
- the memory 101 may be an internal storage unit of the computer device 100, such as a hard disk or memory of the computer device 100.
- The memory 101 may also be an external storage device of the computer device 100, such as a plug-in hard disk, a smart media card (SMC), or a secure digital (SD) card equipped on the computer device 100.
- the memory 101 may also include both an internal storage unit of the computer device 100 and an external storage device thereof.
- the memory 101 is generally used to store an operating system and various application software installed in the computer device 100, for example, the computer-readable instructions of the aforementioned method for forming a movement track.
- the memory 101 can also be used to temporarily store various types of data that have been output or will be output.
- the processor 102 may be a central processing unit (Central Processing Unit, CPU), a controller, a microcontroller, a microprocessor, or other data processing chips in some embodiments.
- the processor 102 is generally used to control the overall operation of the computer device 100.
- the processor 102 is configured to run computer-readable instructions or process data stored in the memory 101, for example, run the computer-readable instructions of the aforementioned method of forming a movement track.
- the network interface 103 may include a wireless network interface or a wired network interface.
- the network interface 103 is generally used to establish a communication connection between the computer device 100 and other electronic devices.
- This application also provides another implementation manner, that is, one or more non-volatile readable storage media are provided. The non-volatile readable storage medium stores computer-readable instructions, and the computer-readable instructions can be executed by at least one processor, so that the at least one processor executes the steps of any one of the foregoing methods for forming a movement track.
Claims (20)
- 1. A method for forming a movement track, used to form a movement track of a target person, comprising: acquiring a profile feature sample of the target person; extracting an image feature set from the profile feature sample; extracting image data within an image data comparison range of a road network and comparing the image data with the image feature set; when a similarity between the image data and the image feature set exceeds a threshold, determining an appearance position, a disappearance position, an appearance time, and a disappearance time of the target person within the image data comparison range; and marking the appearance position, the disappearance position, the appearance time, and the disappearance time into the road network to form the movement track of the target person.
- 2. The method for forming a movement track according to claim 1, wherein the image data comprises body shape image data and facial image data, and the image feature set comprises a body shape feature set and a facial feature set; the step of extracting image data within the image data comparison range and comparing it with the image feature set comprises: applying a body shape image recognition neural network to compare each frame of body shape image in the body shape image data with the body shape feature set; calculating a first similarity between each frame of body shape image and the body shape feature set; applying a facial image recognition neural network to compare each frame of facial image in the facial image data with the facial feature set; and calculating a second similarity between each frame of facial image and the facial feature set.
- 3. The method for forming a movement track according to claim 2, wherein the step of extracting image data within the image data comparison range and comparing it with the image feature set further comprises: when the first similarity between the body shape image and the body shape feature set reaches a first threshold, providing the body shape image to the body shape image recognition neural network for image recognition enhancement training; and when the second similarity between the facial image and the facial feature set reaches a second threshold, providing the facial image to the facial image recognition neural network for image recognition enhancement training.
- 4. The method for forming a movement track according to claim 1, wherein the step of determining the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range comprises: forming the image data whose similarity exceeds the threshold into an image sequence according to a time course; taking the earliest time point in the image sequence as the appearance time and the latest time point in the image sequence as the disappearance time; applying a road network recognition neural network to recognize, in the image data corresponding to the appearance time, a first position of the target person in the road network, and taking the first position as the appearance position; and applying the road network recognition neural network to recognize, in the image data corresponding to the disappearance time, a second position of the target person in the road network, and taking the second position as the disappearance position.
- 5. The method for forming a movement track according to claim 4, wherein the image data comprises body shape image data and facial image data, the similarity comprises a first similarity and a second similarity, and the threshold comprises a first threshold and a second threshold; the step of forming the image data whose similarity exceeds the threshold into an image sequence according to a time course comprises: forming the body shape image data whose first similarity reaches the first threshold into a body shape image sequence according to the time course; forming the facial image data whose second similarity reaches the second threshold into a facial image sequence according to the time course; and fusing the body shape image sequence and the facial image sequence to obtain the image sequence.
- 6. The method for forming a movement track according to claim 5, wherein the step of fusing the body shape image sequence and the facial image sequence to obtain the image sequence comprises: fusing each frame of body shape image in the body shape image sequence with each frame of facial image in the facial image sequence according to the time course; when a node of the time course in the body shape image sequence is missing the body shape image, supplementing it with the facial image of the same node in the facial image sequence; and when a node of the time course in the facial image sequence is missing the facial image, supplementing it with the body shape image of the same node in the body shape image sequence.
- 7. The method for forming a movement track according to claim 1, wherein after the step of determining the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range, the method further comprises: updating the image data comparison range according to the appearance position, the disappearance position, the appearance time, and the disappearance time, by: searching, in combination with the road network, for at least one camera device at the appearance position and at least one camera device at the disappearance position; taking the appearance time as a first time endpoint, acquiring reverse image data in a reverse time course from an image data storage address of the at least one camera device at the appearance position; taking the disappearance time as a second time endpoint, acquiring forward image data in a forward time course from an image data storage address of the at least one camera device at the disappearance position; and updating the image data comparison range with the reverse image data and the forward image data.
- 8. An apparatus for forming a movement track, used to form a movement track of a target person, comprising: a profile feature sample acquisition module, configured to acquire a profile feature sample of the target person; an image feature set extraction module, configured to extract an image feature set from the profile feature sample; a comparison module, configured to extract image data within an image data comparison range of a road network and compare the image data with the image feature set; a target person determination module, configured to determine, when a similarity between the image data and the image feature set exceeds a threshold, an appearance position, a disappearance position, an appearance time, and a disappearance time of the target person within the image data comparison range; and a movement track marking module, configured to mark the appearance position, the disappearance position, the appearance time, and the disappearance time into the road network to form the movement track of the target person.
- 9. A computer device, comprising a memory and a processor, wherein the memory stores computer-readable instructions, and when the processor executes the computer-readable instructions, the following steps are implemented: acquiring a profile feature sample of the target person; extracting an image feature set from the profile feature sample; extracting image data within an image data comparison range of a road network and comparing the image data with the image feature set; when a similarity between the image data and the image feature set exceeds a threshold, determining an appearance position, a disappearance position, an appearance time, and a disappearance time of the target person within the image data comparison range; and marking the appearance position, the disappearance position, the appearance time, and the disappearance time into the road network to form the movement track of the target person.
- 10. The computer device according to claim 9, wherein the image data comprises body shape image data and facial image data, and the image feature set comprises a body shape feature set and a facial feature set; the step of extracting image data within the image data comparison range and comparing it with the image feature set comprises: applying a body shape image recognition neural network to compare each frame of body shape image in the body shape image data with the body shape feature set; calculating a first similarity between each frame of body shape image and the body shape feature set; applying a facial image recognition neural network to compare each frame of facial image in the facial image data with the facial feature set; and calculating a second similarity between each frame of facial image and the facial feature set.
- 11. The computer device according to claim 10, wherein the step of extracting image data within the image data comparison range and comparing it with the image feature set further comprises: when the first similarity between the body shape image and the body shape feature set reaches a first threshold, providing the body shape image to the body shape image recognition neural network for image recognition enhancement training; and when the second similarity between the facial image and the facial feature set reaches a second threshold, providing the facial image to the facial image recognition neural network for image recognition enhancement training.
- The computer device according to claim 9, wherein the step of determining the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range comprises: forming the image data whose similarity exceeds the threshold into an image sequence in chronological order; taking the earliest time point in the image sequence as the appearance time, and the latest time point in the image sequence as the disappearance time; applying a road-network recognition neural network to identify a first position of the target person in the road network in the image data corresponding to the appearance time; taking the first position as the appearance position; applying the road-network recognition neural network to identify a second position of the target person in the road network in the image data corresponding to the disappearance time; and taking the second position as the disappearance position.
- The computer device according to claim 12, wherein the image data comprises body-shape image data and face image data, the similarity comprises a first similarity and a second similarity, and the threshold comprises a first threshold and a second threshold; the step of forming the image data whose similarity exceeds the threshold into an image sequence in chronological order comprises: forming the body-shape image data whose first similarity reaches the first threshold into a body-shape image sequence in chronological order; forming the face image data whose second similarity reaches the second threshold into a face image sequence in chronological order; and fusing the body-shape image sequence with the face image sequence to obtain the image sequence.
- The computer device according to claim 13, wherein the step of fusing the body-shape image sequence with the face image sequence to obtain the image sequence comprises: fusing each frame of body-shape image in the body-shape image sequence with each frame of face image in the face image sequence in chronological order; when the body-shape image is missing at a node of the time progression in the body-shape image sequence, supplementing it with the face image at the same node in the face image sequence; and when the face image is missing at a node of the time progression in the face image sequence, supplementing it with the body-shape image at the same node in the body-shape image sequence.
- One or more non-volatile readable storage media storing computer-readable instructions, wherein the computer-readable instructions, when executed by a processor, cause the processor to perform the following steps: acquiring an appearance feature sample of the target person; extracting an image feature set from the appearance feature sample; extracting image data within an image data comparison range of a road network and comparing the image data with the image feature set; when the similarity between the image data and the image feature set exceeds a threshold, determining the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range; and annotating the appearance position, the disappearance position, the appearance time, and the disappearance time in the road network to form the movement trajectory of the target person.
- The non-volatile readable storage medium according to claim 15, wherein the image data comprises body-shape image data and face image data, and the image feature set comprises a body-shape feature set and a face feature set; the step of extracting image data within the image data comparison range and comparing the image data with the image feature set comprises: applying a body-shape image recognition neural network to compare each frame of body-shape image in the body-shape image data with the body-shape feature set; calculating a first similarity between each frame of body-shape image and the body-shape feature set; applying a face image recognition neural network to compare each frame of face image in the face image data with the face feature set; and calculating a second similarity between each frame of face image and the face feature set.
- The non-volatile readable storage medium according to claim 16, wherein the step of extracting image data within the image data comparison range and comparing the image data with the image feature set further comprises: when the first similarity between the body-shape image and the body-shape feature set reaches a first threshold, providing the body-shape image to the body-shape image recognition neural network for image recognition enhancement training; and when the second similarity between the face image and the face feature set reaches a second threshold, providing the face image to the face image recognition neural network for image recognition enhancement training.
- The non-volatile readable storage medium according to claim 15, wherein the step of determining the appearance position, disappearance position, appearance time, and disappearance time of the target person within the image data comparison range comprises: forming the image data whose similarity exceeds the threshold into an image sequence in chronological order; taking the earliest time point in the image sequence as the appearance time, and the latest time point in the image sequence as the disappearance time; applying a road-network recognition neural network to identify a first position of the target person in the road network in the image data corresponding to the appearance time; taking the first position as the appearance position; applying the road-network recognition neural network to identify a second position of the target person in the road network in the image data corresponding to the disappearance time; and taking the second position as the disappearance position.
- The non-volatile readable storage medium according to claim 18, wherein the image data comprises body-shape image data and face image data, the similarity comprises a first similarity and a second similarity, and the threshold comprises a first threshold and a second threshold; the step of forming the image data whose similarity exceeds the threshold into an image sequence in chronological order comprises: forming the body-shape image data whose first similarity reaches the first threshold into a body-shape image sequence in chronological order; forming the face image data whose second similarity reaches the second threshold into a face image sequence in chronological order; and fusing the body-shape image sequence with the face image sequence to obtain the image sequence.
- The non-volatile readable storage medium according to claim 19, wherein the step of fusing the body-shape image sequence with the face image sequence to obtain the image sequence comprises: fusing each frame of body-shape image in the body-shape image sequence with each frame of face image in the face image sequence in chronological order; when the body-shape image is missing at a node of the time progression in the body-shape image sequence, supplementing it with the face image at the same node in the face image sequence; and when the face image is missing at a node of the time progression in the face image sequence, supplementing it with the body-shape image at the same node in the body-shape image sequence.
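The sequence-based determination claimed above (keep frames whose similarity exceeds the threshold, order them chronologically, then read the appearance time/position from the earliest frame and the disappearance time/position from the latest) can be sketched as follows. This is a minimal illustration, not the patented implementation; the `MatchedFrame` record and its field names are assumptions introduced for the sketch.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical record for one frame that matched the image feature set:
# its capture time, the road-network position of the capturing camera,
# and its similarity score against the feature set.
@dataclass
class MatchedFrame:
    timestamp: float
    position: str
    similarity: float

def appearance_and_disappearance(frames: List[MatchedFrame],
                                 threshold: float) -> Tuple[float, str, float, str]:
    """Form the image sequence from frames whose similarity exceeds the
    threshold and read off appearance/disappearance time and position."""
    sequence = sorted((f for f in frames if f.similarity > threshold),
                      key=lambda f: f.timestamp)
    if not sequence:
        raise ValueError("no frame exceeded the similarity threshold")
    first, last = sequence[0], sequence[-1]
    # earliest time point -> appearance; latest time point -> disappearance
    return first.timestamp, first.position, last.timestamp, last.position
```

In the patent the positions come from a road-network recognition neural network applied to the matching frames; here each frame simply carries a precomputed position string to keep the sketch self-contained.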
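The fusion step claimed for the body-shape and face image sequences (merge the two sequences node by node along the time progression, filling a node missing in one sequence with the other sequence's frame at the same node) can be sketched like this. The dictionary-of-nodes representation and function name are illustrative assumptions.

```python
from typing import Dict, Optional, Tuple

def fuse_sequences(body_seq: Dict[int, str],
                   face_seq: Dict[int, str]) -> Dict[int, Tuple[Optional[str], Optional[str]]]:
    """Fuse per-node body-shape and face frames; when one sequence is missing
    a frame at a time node, supplement it with the other modality's frame."""
    fused = {}
    for node in sorted(set(body_seq) | set(face_seq)):
        body = body_seq.get(node)
        face = face_seq.get(node)
        # a missing frame at a node is supplemented from the other sequence
        fused[node] = (body if body is not None else face,
                       face if face is not None else body)
    return fused
```

This mutual gap-filling is what lets the fused sequence stay continuous when, say, the face is occluded in some frames but the body shape is still visible, or vice versa.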
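The comparison-range update claimed above (from cameras at the appearance position, gather footage backward in time before the appearance time; from cameras at the disappearance position, gather footage forward in time after the disappearance time) can be sketched as follows, modelling each camera's stored footage as a list of frame timestamps. The storage layout and names are assumptions for illustration only.

```python
from typing import Dict, List

def update_comparison_range(storage: Dict[str, List[float]],
                            appear_cams: List[str], appear_time: float,
                            vanish_cams: List[str], vanish_time: float) -> List[float]:
    """Build the new comparison range from reverse image data (before the
    appearance time, at the appearance-position cameras) and forward image
    data (after the disappearance time, at the disappearance-position cameras)."""
    reverse = [t for cam in appear_cams
               for t in storage.get(cam, []) if t < appear_time]
    forward = [t for cam in vanish_cams
               for t in storage.get(cam, []) if t > vanish_time]
    return sorted(reverse) + sorted(forward)
```

Iterating this update extends the trajectory in both directions: each pass narrows the search to footage immediately before the person first appeared and immediately after they were last seen.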
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910377560.1A CN110276244B (zh) | 2019-05-07 | 2019-05-07 | Method, apparatus, computer device and storage medium for forming a movement trajectory |
CN201910377560.1 | 2019-05-07 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020224116A1 true WO2020224116A1 (zh) | 2020-11-12 |
Family
ID=67959802
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/103180 WO2020224116A1 (zh) | 2019-05-07 | 2019-08-29 | Method, apparatus, computer device and storage medium for forming a movement trajectory |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110276244B (zh) |
WO (1) | WO2020224116A1 (zh) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107644204A (zh) * | 2017-09-12 | 2018-01-30 | 南京凌深信息科技有限公司 | Human body recognition and tracking method for a security system |
CN107909025A (zh) * | 2017-11-13 | 2018-04-13 | 毛国强 | Person recognition and tracking method and system based on video and wireless monitoring |
CN109194619A (zh) * | 2018-08-06 | 2019-01-11 | 湖南深纳数据有限公司 | Big data service system applied to smart security |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101163940B (zh) * | 2005-04-25 | 2013-07-24 | 株式会社吉奥技术研究所 | Photographing position analysis method |
CN107423674A (zh) * | 2017-05-15 | 2017-12-01 | 广东数相智能科技有限公司 | Face-recognition-based person search method, electronic device, and storage medium |
CN108288025A (zh) * | 2017-12-22 | 2018-07-17 | 深圳云天励飞技术有限公司 | Vehicle-mounted video surveillance method, apparatus, and device |
2019
- 2019-05-07 CN CN201910377560.1A patent/CN110276244B/zh active Active
- 2019-08-29 WO PCT/CN2019/103180 patent/WO2020224116A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN110276244B (zh) | 2024-04-09 |
CN110276244A (zh) | 2019-09-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11455735B2 (en) | Target tracking method, device, system and non-transitory computer readable storage medium | |
WO2020042419A1 (zh) | Gait-based identity recognition method, apparatus, and electronic device | |
WO2020038136A1 (zh) | Facial recognition method, apparatus, electronic device, and computer-readable medium | |
CN108446585B (zh) | Target tracking method, apparatus, computer device, and storage medium | |
WO2021103721A1 (zh) | Part-segmentation-based recognition model training and vehicle re-identification method and apparatus | |
Zeng et al. | Silhouette-based gait recognition via deterministic learning | |
CN109145742B (zh) | Pedestrian recognition method and system | |
US10762644B1 (en) | Multiple object tracking in video by combining neural networks within a bayesian framework | |
CN109657533A (zh) | Pedestrian re-identification method and related products | |
CN110705478A (zh) | Face tracking method, apparatus, device, and storage medium | |
US9911053B2 (en) | Information processing apparatus, method for tracking object and program storage medium | |
KR102132722B1 (ko) | Method and system for tracking multiple objects in video | |
CN109492576B (zh) | Image recognition method, apparatus, and electronic device | |
CN110009060B (zh) | Robust long-term tracking method based on correlation filtering and target detection | |
CN108898623A (zh) | Target tracking method and device | |
CN113608663B (zh) | Fingertip tracking method based on deep learning and the k-curvature method | |
JP2022082493A (ja) | Pedestrian re-identification method with random occlusion recovery based on noise channels | |
Bhuyan et al. | Trajectory guided recognition of hand gestures having only global motions | |
CN112541403A (zh) | Indoor person fall detection method using an infrared camera | |
CN112329602A (zh) | Method and apparatus for acquiring face-annotated images, electronic device, and storage medium | |
CN109858464B (zh) | Base-library data processing method, face recognition method, apparatus, and electronic device | |
CN114998628A (zh) | Siamese-network long-term target tracking method based on template matching | |
CN113989929A (zh) | Human action recognition method, apparatus, electronic device, and computer-readable medium | |
CN113989914B (zh) | Security monitoring method and system based on face recognition | |
WO2020224116A1 (zh) | Method, apparatus, computer device, and storage medium for forming a movement trajectory | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19927646; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 19927646; Country of ref document: EP; Kind code of ref document: A1 |
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 01.06.2022) |