CN102456225B - Video monitoring system and moving target detecting and tracking method thereof - Google Patents
Video monitoring system and moving target detecting and tracking method thereof
- Publication number: CN102456225B
- Application number: CN201010515055.8A
- Authority: CN (China)
- Prior art keywords: target, image, moving, point, moving target
Abstract
Description
Technical field
The present invention relates to the technical field of video monitoring, and in particular to a moving object detection and tracking method and system.
Background technology
In traditional intelligent video monitoring systems, the surveillance cameras are mostly fixed cameras: the background image is essentially static while only the foreground targets move. Such systems suffer from the following problems in application: they can only monitor cyclically between several preset positions, and a moving target easily leaves the field of view and can no longer be tracked continuously. These limitations severely restrict the application of traditional intelligent video monitoring. Monitoring with a moving camera overcomes the above defects of fixed-camera monitoring and has been applied to vehicle-mounted monitoring, PTZ (Pan-Tilt-Zoom: camera panning, tilting, and lens zooming) target tracking, intelligent robot vision, and the like, with broad application prospects. In recent years, the technology of detecting and tracking objects with a moving camera has therefore attracted great attention from academia at home and abroad.
According to the domestic and foreign literature retrieved to date, because the motion of the camera causes the background to change, the method commonly adopted for target detection is: first estimate the projective transformation parameters between two consecutive frames; then subtract the later frame from the frame obtained by projectively transforming the earlier frame, yielding a static background; finally, obtain the moving targets by background subtraction. The key of this method is to estimate the projective transformation parameters accurately and rapidly. Common approaches find corresponding feature points between consecutive video frames using image gray-level correlation, SIFT (Scale-Invariant Feature Transform) features, SURF (Speeded-Up Robust Features) features, and the like, and then estimate the projective transformation matrix parameters by least squares. Comparatively speaking, the SIFT and SURF methods are more stable and reliable when finding image matching points, but their computational cost is large, making it difficult to meet the needs of real-time analysis.
For target tracking, the method commonly adopted is to use color and shape information as the features of the tracked target. Although color is a very useful feature, tracking by color alone is often difficult when the target's color is similar to the background color, and easily leads to tracking errors; some researchers have therefore fused multiple features. Using SIFT or SURF features as the feature information of the tracked target is comparatively stable, adapts well to background color and illumination variation, and is a fairly ideal way to express target features; its shortcoming is that for a minority of targets it may be difficult or even impossible to extract stable feature points.
Summary of the invention
The object of the present invention is to provide a moving object detection and tracking method and system that overcome the shortcomings of current object detection and tracking technology when the camera is in motion, detect moving targets quickly and accurately, track them reliably and continuously, and reduce complexity.
Embodiments of the present invention are achieved as follows.
A moving object detection and tracking method, applied to a video monitoring system whose camera is in motion, comprises:
A moving object detection step: acquire two consecutive frames f(t-1) and f(t) and apply Gaussian smoothing; extract the corresponding feature points of f(t-1) and f(t) by combining Harris corners with HOG descriptors, obtaining the corresponding-point set S′; from S′, solve the projective transformation matrix A by the generalized inverse method; apply the projective transformation A to the former frame f(t-1) to obtain frame F; take the difference between F and the later frame f(t) to obtain the difference image D, then threshold it to obtain the binary image B; filter the binary image B to obtain all moving targets;
Wherein extracting the corresponding feature points of f(t-1) and f(t) by combining Harris corners with HOG descriptors to obtain the corresponding-point set S′ further comprises:
a21: extract the Harris corner features of f(t-1) and f(t) respectively and build a HOG descriptor for each Harris corner, forming corner feature vectors (x, y, d), where x and y are the corner's abscissa and ordinate in the image and d is its HOG descriptor;
a22: match the corner feature vector sets of f(t-1) and f(t) to obtain the corner matching set S = {(x_i, y_i, d_i) → (x′_i, y′_i, d′_i) | i = 1, 2, 3, …, m};
a23: filter the mismatched points out of S with the RANSAC algorithm to obtain the corresponding-point set
S′ = {(x_i, y_i, d_i) → (x′_i, y′_i, d′_i) | i = 1, 2, 3, …, n}, with n ≤ m;
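As an illustration of the matching rule in step a22, the nearest-descriptor criterion can be sketched as follows (a minimal NumPy sketch; the function name and array layout are assumptions, not part of the patent):

```python
import numpy as np

def match_corners(desc_prev, desc_curr):
    """For each corner descriptor d_i of f(t-1), find the corner of f(t)
    whose HOG descriptor d'_j minimises |d_i - d'_j| (step a22).
    desc_prev: (m, k) array, desc_curr: (n, k) array.
    Returns a list of index pairs (i, j)."""
    matches = []
    for i, d in enumerate(desc_prev):
        dists = np.linalg.norm(desc_curr - d, axis=1)  # |d_i - d'_j| for all j
        matches.append((i, int(np.argmin(dists))))
    return matches
```

The resulting pair set S is then pruned with RANSAC in step a23 before the projective matrix is estimated.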
A moving target modeling step: for each moving target, compute its position and size, crop the corresponding target image out of the current frame, and compute the HOG descriptors of the target image's extreme points, Harris corners, and each feature point; build a moving target model from the target information obtained in the detection step, the model representing the target's features by the combination of its position in the image, size, motion direction, displacement, Harris corners, extreme points, and HOG descriptors; the moving target model is defined as a = {h, w, area, d, desc, track}, where h is the target width, w the target length, area the target area, d the target circularity, desc the target's current set of HOG feature vectors, and track the set of target trajectories;
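The target model a = {h, w, area, d, desc, track} can be sketched as a plain record (the field names follow the text; the class itself is illustrative, not from the patent):

```python
from dataclasses import dataclass, field

@dataclass
class TargetModel:
    """Moving target model a = {h, w, area, d, desc, track}."""
    h: float                                    # target width (per the text)
    w: float                                    # target length
    area: float                                 # target area
    d: float                                    # target circularity
    desc: list = field(default_factory=list)    # current HOG feature vectors
    track: list = field(default_factory=list)   # trajectory points (x, y)
```

The tracking step reads this record to predict the target's next position and updates desc and track after each successful match.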
A moving target tracking step: for each moving target, estimate its position and size in the current frame from its moving target model, and crop the corresponding region out of the current frame as the estimated target image f; compute the feature values of f, including its Harris corners, extreme points and their HOG descriptors, to obtain the feature information of the estimated target image; match the features of the estimated target image against those of the original target image; if the match succeeds, update the moving target model, otherwise delete it.
Preferably, in step a22 the corner matching condition is: for any corner (x_i, y_i, d_i) in f(t-1) and any corner (x′_j, y′_j, d′_j) in f(t), if |d_i − d′_j| = min{|d_i − d′_1|, |d_i − d′_2|, …, |d_i − d′_n|}, then (x_i, y_i, d_i) and (x′_j, y′_j, d′_j) are judged to match.
Preferably, filtering the binary image B to obtain all moving targets further comprises:
a61: apply erosion and dilation operations to B to remove interference noise points and holes;
a62: extract all moving targets from B, compute each target's centroid, length and width, circularity, and area, and save all target information to a moving target linked list.
Preferably, after step a62 the method further comprises: a63, removing pseudo-targets from the moving target linked list.
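Step a61's erosion and dilation can be sketched with 3x3 structuring elements in pure NumPy (illustrative helper names; a production system would typically use an image-processing library):

```python
import numpy as np

def dilate(b):
    """3x3 binary dilation: OR of the eight shifted copies plus the original."""
    p = np.pad(b, 1)
    h, w = b.shape
    out = np.zeros_like(b)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= p[dy:dy + h, dx:dx + w]
    return out

def erode(b):
    """3x3 binary erosion, via duality with dilation on the complement."""
    return 1 - dilate(1 - b)

def clean_mask(b):
    """Opening (erode then dilate) removes speckle noise; closing
    (dilate then erode) fills small holes, as in step a61."""
    opened = dilate(erode(b))
    return erode(dilate(opened))
```

On a mask containing a solid 3x3 target plus an isolated noise pixel, the opening removes the noise while the closing leaves the target block intact.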
Preferably, estimating the moving target's position and size in the current frame from its model information and cropping it out of the current frame to obtain the estimated target image f further comprises:
c11: estimate the target's position (x′_i, y′_i), length w′, and width h′ in the current frame by
x′_i = x_{i-1} + Δx, y′_i = y_{i-1} + Δy, h′ = h + k·Δy, w′ = w + k·Δx,
where Δx and Δy denote the target's displacement between the two preceding frames and k is a scale coefficient;
c12: crop the moving target out of the current frame according to the parameters (x′_i, y′_i), h′, w′ to obtain the estimated target image f.
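The position/size estimate of step c11 is one line per quantity; a sketch (the scale coefficient k is not fixed by the text, so it is left as a parameter):

```python
def predict_target(x_prev, y_prev, h, w, dx, dy, k=1.0):
    """Step c11: extrapolate the target's position by its last displacement
    (dx, dy) and grow/shrink its size by k times that displacement."""
    x_est = x_prev + dx          # x'_i = x_{i-1} + Dx
    y_est = y_prev + dy          # y'_i = y_{i-1} + Dy
    h_est = h + k * dy           # h'   = h + k*Dy
    w_est = w + k * dx           # w'   = w + k*Dx
    return x_est, y_est, h_est, w_est
```

The predicted box is then used in step c12 to crop the estimated target image, which narrows the search range and reduces the matching cost noted in the summary.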
According to another aspect of the present invention, a moving object detection and tracking system is provided, applied to a video monitoring system whose camera is in motion. The system comprises a moving object detection module, a moving target modeling module, and a moving target tracking module, wherein:
The moving object detection module is configured to acquire two consecutive frames f(t-1) and f(t) and apply Gaussian smoothing; extract the corresponding feature points of f(t-1) and f(t) by combining Harris corners with HOG descriptors, obtaining the corresponding-point set S′; solve the projective transformation matrix A from S′ by the generalized inverse method; apply the projective transformation A to the former frame f(t-1) to obtain image F; take the difference between F and the later frame f(t) to obtain the difference image D, then threshold it to obtain the binary image B; and filter B to obtain all moving targets.
Wherein extracting the corresponding feature points of f(t-1) and f(t) by combining Harris corners with HOG descriptors to obtain S′ specifically comprises: extracting the Harris corner features of f(t-1) and f(t) respectively and building a HOG descriptor for each Harris corner, forming corner feature vectors (x, y, d), where x and y are the corner's abscissa and ordinate in the image and d is its HOG descriptor; matching the corner feature vector sets of f(t-1) and f(t) to obtain the corner matching set S = {(x_i, y_i, d_i) → (x′_i, y′_i, d′_i) | i = 1, 2, 3, …, m}; and filtering the mismatched points out of S with the RANSAC algorithm to obtain the corresponding-point set
S′ = {(x_i, y_i, d_i) → (x′_i, y′_i, d′_i) | i = 1, 2, 3, …, n}, with n ≤ m.
The moving target modeling module is configured to, for each moving target, compute its position and size, crop the corresponding target image out of the current frame, and compute the HOG descriptors of the target image's extreme points, Harris corners, and each feature point; and to build a moving target model from the target information obtained by the detection module, the model representing the target's features by the combination of its position in the image, size, motion direction, displacement, Harris corners, extreme points, and HOG descriptors. The moving target model is defined as a = {h, w, area, d, desc, track}, where h is the target width, w the target length, area the target area, d the target circularity, desc the target's current set of HOG feature vectors, and track the set of target trajectories.
The moving target tracking module is configured to, for each moving target, estimate its position and size in the current frame from its moving target model and crop the corresponding region out of the current frame as the estimated target image f; compute the feature values of f, including its Harris corners, extreme points and their HOG descriptors, to obtain the feature information of the estimated target image; and match the features of the estimated target image against those of the original target image, updating the moving target model if the match succeeds and deleting it otherwise.
Compared with the prior art, the embodiments of the present invention have the following beneficial effects:
(1) In video image representation, because there is little translation and rotation between two consecutive frames, the translation and rotation invariance of the HOG descriptor applies; moreover, corner features are stable and reliable, and corner extraction is faster than SIFT or SURF extreme-point extraction, which better suits real-time video computation. Combining these properties, the present invention's use of Harris corners with HOG descriptors obtains the invariant features of the video image quickly, is well suited to real-time video computation, and provides a reliable guarantee for the subsequent projection matrix computation.
(2) In the representation of the moving target, neither corner features nor extreme points alone can completely express the target's features: a round, smooth target may yield few or no corners, and a target with uniform color distribution may yield no extreme points, either of which may cause feature extraction to fail and the target to become untrackable. The present invention therefore expresses the moving target by combining corners with extreme points, guaranteeing tracking stability. Although this introduces extreme-point computation and raises computational complexity, the moving target is small relative to the whole image; computing extreme-point features only on the target rather than on the whole image still meets the needs of real-time processing.
(3) In the tracking process, because a target's displacement and rotation between two consecutive frames are small, the same target can be considered to undergo an affine transformation between the two frames. On this premise, the present invention uses the affine transformation matrix to accurately locate the original target's position and size in the current frame and finally updates the target's model parameters, ensuring continuity of tracking. The benefits of this approach are accurate tracking registration and strong tracking sustainability; it can overcome partial occlusion between targets and adapts well to background and illumination changes.
In summary, the invention solves the problem of detecting and tracking multiple moving targets in real time while the camera is in motion, ensuring both real-time performance and reliability. In solving for the image projective transformation matrix, matching stable corner features not only greatly reduces the computation but also guarantees matching precision, so the projective transformation matrix is estimated rapidly and accurately, which in turn guarantees the precision of moving object detection. Using corners and extreme points as target features makes target identification more robust and tracking more sustainable, while overcoming partial occlusion and attitude changes between targets and adapting well to background and illumination changes. Motion estimation narrows the target search range, reducing computation and greatly lowering the complexity of tracking.
Brief description of the drawings
Fig. 1 is a structural diagram of the video monitoring system in an embodiment of the present invention.
Fig. 2 is a flowchart of the moving target detection method in an embodiment of the present invention.
Fig. 3 is a flowchart of the method for building the moving target model in an embodiment of the present invention.
Fig. 4 is a flowchart of the moving target tracking method in an embodiment of the present invention.
Fig. 5 is a schematic diagram of the binary moving target model in an embodiment of the present invention.
Embodiment
To make the objects, technical solution, and advantages of the present invention clearer, the present invention is further elaborated below with reference to the drawings and embodiments. It should be appreciated that the specific embodiments described herein serve only to explain the present invention and are not intended to limit it.
Referring to Fig. 1, the video monitoring system provided by this embodiment mainly comprises a vehicle-mounted camera 11, a DSP 12, a moving object detection module 13, and a moving target tracking module 14. The DSP 12 processes the video images collected by the vehicle-mounted camera 11 and calls the moving object detection module 13 to extract moving targets; if a moving target exists, it calls the moving target tracking module 14 to track it. The concrete processing comprises: the moving object detection step, the moving target modeling step, and the moving target tracking step. Wherein:
(1) As shown in Fig. 2, moving-target detection specifically comprises:
Step 101: capture two consecutive frames f(t-1) and f(t) and apply Gaussian smoothing to both.
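The smoothing of step 101 can be sketched with a separable Gaussian filter. The sketch below is a minimal NumPy illustration only (a library routine would normally be used), and the 3-sigma kernel radius is an assumption, not taken from the source:

```python
import numpy as np

def gaussian_blur(img, sigma=1.0):
    """Separable Gaussian smoothing with an assumed 3-sigma kernel radius."""
    r = int(3 * sigma)
    k = np.exp(-np.arange(-r, r + 1) ** 2 / (2 * sigma ** 2))
    k /= k.sum()  # normalize so the kernel preserves total intensity
    pad = np.pad(img.astype(float), r, mode="edge")
    # horizontal pass, then vertical pass (separability of the Gaussian)
    tmp = np.apply_along_axis(lambda row: np.convolve(row, k, "valid"), 1, pad)
    return np.apply_along_axis(lambda col: np.convolve(col, k, "valid"), 0, tmp)

img = np.zeros((7, 7))
img[3, 3] = 1.0                      # single impulse
out = gaussian_blur(img)
print(round(float(out.sum()), 6))    # -> 1.0 (intensity preserved)
```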
Step 102: compute the corresponding feature points of the two frames.
Specifically: 1. extract the Harris corner features of both images and build an HOG descriptor for each corner, forming the corner-feature vector (x, y, d), where x and y are the corner's horizontal and vertical coordinates in the image and d is its HOG descriptor;
2. Corner matching: let the corner-feature vector set produced by image f(t-1) be
desc(t-1) = {(x_1, y_1, d_1), (x_2, y_2, d_2), …, (x_m, y_m, d_m)}, and the set produced by image f(t) be desc(t) = {(x'_1, y'_1, d'_1), (x'_2, y'_2, d'_2), …, (x'_n, y'_n, d'_n)}. The matching condition is:
(x_i, y_i, d_i) → (x'_j, y'_j, d'_j)  if  |d_i - d'_j| = arg min{|d_i - d'_1|, |d_i - d'_2|, …, |d_i - d'_n|}
yielding the corner-matching set S = {(x_i, y_i, d_i) → (x'_i, y'_i, d'_i) | i = 1, 2, 3, …, m}.
3. On the basis of S, filter out the mismatched points with the RANSAC algorithm to obtain the set S':
S' = {(x_i, y_i, d_i) → (x'_i, y'_i, d'_i) | i = 1, 2, 3, …, n}, with n ≤ m.
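The arg-min rule of step 102 amounts to a nearest-descriptor search. A minimal NumPy sketch follows, interpreting the descriptor distance |d_i - d'_j| as the Euclidean norm; the two-component toy descriptors are purely illustrative, and the RANSAC filtering stage is omitted:

```python
import numpy as np

def match_corners(desc_prev, desc_curr):
    """For each corner of f(t-1), pick the corner of f(t) whose HOG
    descriptor is nearest (the arg-min matching condition of step 102)."""
    # desc_*: (N, D) arrays, one descriptor row per corner
    dists = np.linalg.norm(desc_prev[:, None, :] - desc_curr[None, :, :], axis=2)
    return np.argmin(dists, axis=1)   # index j in desc_curr matched to each i

prev = np.array([[0.0, 0.0], [1.0, 1.0]])   # toy descriptors for f(t-1)
curr = np.array([[1.1, 0.9], [0.1, -0.1]])  # toy descriptors for f(t)
print(match_corners(prev, curr).tolist())   # -> [1, 0]
```

In practice the raw matches would then be passed through RANSAC to discard outliers, as substep 3 describes.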
Step 103: from the corresponding point set S' obtained in step 102, compute the projection matrix A by the generalized-inverse method, i.e. solve the over-determined system of correspondence equations in the least-squares sense.
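The generalized-inverse solution of step 103 can be sketched as follows, under the assumption that A is a 3x3 projective matrix with its bottom-right element fixed to 1 (the source's explicit matrix equation is given only as a drawing). Each correspondence contributes two linear equations, and the stacked system is solved with the Moore-Penrose pseudo-inverse:

```python
import numpy as np

def fit_projection(src, dst):
    """Least-squares estimate of a 3x3 projective matrix A (a33 = 1)
    from matched points, via the Moore-Penrose generalized inverse."""
    rows, rhs = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u*(a31*x + a32*y + 1) = a11*x + a12*y + a13, rearranged linearly
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); rhs.append(u)
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y]); rhs.append(v)
    h = np.linalg.pinv(np.array(rows)) @ np.array(rhs)
    return np.append(h, 1.0).reshape(3, 3)

# synthetic check with an affine ground truth (third row [0, 0, 1]),
# so mapped points can be read off without a perspective divide
src = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 3)]
A_true = np.array([[1.0, 0.1, 2.0], [0.0, 1.0, -1.0], [0.0, 0.0, 1.0]])
dst = [(A_true @ [x, y, 1])[:2] for x, y in src]
A = fit_projection(src, dst)
print(np.allclose(A, A_true, atol=1e-6))   # -> True
```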
Step 104: apply the projective transformation A to the previous frame f(t-1) to obtain the compensated frame F, where (u, v) denotes pixel coordinates in F and (x, y) the corresponding pixel coordinates in f(t-1).
Step 105: take the difference of the compensated frame F and the later frame f(t) to obtain the difference image D, i.e. D = |F - f(t)|, then obtain the binary image B by threshold segmentation.
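Steps 104 and 105 can be sketched together as below, assuming grayscale uint8 frames. For brevity the warp uses nearest-neighbour inverse mapping, and the threshold value 30 is an illustrative assumption rather than a value from the source:

```python
import numpy as np

def detect_motion(f_prev, f_curr, A, thresh=30):
    """Warp f(t-1) by A (step 104), difference against f(t) and
    threshold into a binary image B (step 105)."""
    h, w = f_curr.shape
    uu, vv = np.meshgrid(np.arange(w), np.arange(h))
    # inverse-map each pixel (u, v) of F back into f(t-1)
    pts = np.linalg.inv(A) @ np.stack([uu.ravel(), vv.ravel(), np.ones(uu.size)])
    x = np.round(pts[0] / pts[2]).astype(int).clip(0, w - 1).reshape(h, w)
    y = np.round(pts[1] / pts[2]).astype(int).clip(0, h - 1).reshape(h, w)
    F = f_prev[y, x]                                 # motion-compensated frame
    D = np.abs(F.astype(int) - f_curr.astype(int))   # difference image D
    return (D > thresh).astype(np.uint8)             # binary image B

f_prev = np.zeros((10, 10), np.uint8)
f_curr = f_prev.copy()
f_curr[2:4, 2:4] = 255                 # a 2x2 "moving object" appears
B = detect_motion(f_prev, f_curr, np.eye(3))
print(int(B.sum()))                    # -> 4
```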
Step 106: apply filtering operations to the binary image B, specifically:
1. apply erosion and dilation to B to remove interference noise points and fill holes;
2. extract the moving targets from B and compute each target's centroid, length and width, circularity and area, as shown in Fig. 5 (the centroid is the centre of the white region, and target area = length × width); then append all targets to the moving-target list objList = {a_1, a_2, a_3, …, a_n};
3. remove pseudo-targets according to the empirical conditions that real targets satisfy, obtaining the final, reliable moving-target list. The empirical conditions are: (i) the target aspect ratio lies in the interval [0.2, 5.0]; (ii) the target circularity is greater than 0.3; (iii) the target area is greater than 200.
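The pseudo-target filter of step 106 reduces to three range checks. The sketch below uses hypothetical blob records and, following the text, takes the area as length × width rather than the blob's pixel count:

```python
def is_real_target(length, width, area, circularity):
    """Empirical conditions of step 106: aspect ratio in [0.2, 5.0],
    circularity > 0.3, area > 200."""
    aspect = length / width
    return 0.2 <= aspect <= 5.0 and circularity > 0.3 and area > 200

obj_list = [  # illustrative candidate blobs, not real data
    {"length": 40, "width": 30, "area": 1200, "circularity": 0.8},  # plausible
    {"length": 100, "width": 4, "area": 400, "circularity": 0.5},   # too elongated
    {"length": 10, "width": 10, "area": 100, "circularity": 0.9},   # too small
]
kept = [o for o in obj_list
        if is_real_target(o["length"], o["width"], o["area"], o["circularity"])]
print(len(kept))   # -> 1
```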
(2) Building the moving-target model, as shown in Fig. 3, comprises:
Step 201: iterate over the moving-target list and extract the information of the n-th moving target (n starts at 1).
Step 202: crop the target out of the current video frame according to its position and size, then compute its extreme points and Harris corners (the extreme points are computed by the method suggested by David Lowe, and need only be extracted at the original image scale), and compute the HOG descriptor of each feature point.
Step 203: build and save the moving-target model. The model information comprises the target's position in the image, size, motion direction, displacement, and the coordinates and HOG descriptors of its corners and extreme points.
The target model is defined as a = {h, w, area, d, desc, track},
where h is the target height, w the target width, area the target area, d the target circularity, desc the target's current set of HOG feature vectors, and track the set of target trajectory points.
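The model a = {h, w, area, d, desc, track} maps naturally onto a record type. A minimal Python sketch follows; the field types are assumptions, since the source does not fix them:

```python
from dataclasses import dataclass, field

@dataclass
class TargetModel:
    """Target model a = {h, w, area, d, desc, track} of step 203."""
    h: float            # target height in pixels (assumed unit)
    w: float            # target width in pixels
    area: float         # target area
    d: float            # target circularity
    desc: list          # current HOG feature vectors of corners / extreme points
    track: list = field(default_factory=list)  # trajectory: (x, y) centroids

t = TargetModel(h=20, w=30, area=600, d=0.7, desc=[])
t.track.append((5.0, 5.0))   # record the first centroid observation
```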
Step 204: set n = n + 1; if n does not exceed the length of the target list, go to step 201, otherwise go to stage (3).
(3) Moving-target tracking, as shown in Fig. 4, comprises:
Step 301: iterate over the moving-target list and obtain the model information a_j of the j-th moving target.
Step 302: estimate the position and size of the j-th target in the current frame from the moving-target model built in stage (2), and crop it out of the current frame to obtain the estimated target image. Specifically:
1. Let track = {(x_i, y_i) | i = 1, 2, 3, …, n} be the target's trajectory set. The estimated position (x'_i, y'_i) of the target in the current frame is
x'_i = x_{i-1} + Δx,  y'_i = y_{i-1} + Δy,
with estimated height h' = h + kΔy and width w' = w + kΔx,
where Δx and Δy are the target's inter-frame displacements estimated from the trajectory set track, and k is a scale coefficient.
2. Crop the target out of the current frame using the parameters (x'_i, y'_i), h', w' estimated in substep 1 to obtain the target image f.
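The prediction of step 302 can be sketched as below. Since the source gives the formula for Δx and Δy only as a drawing, the last inter-frame displacement of the trajectory is assumed here, and k = 0.5 is an illustrative choice:

```python
def predict(track, h, w, k=0.5):
    """Extrapolate the target's next position from its last two trajectory
    points and grow the search window by k times the displacement
    (a sketch of step 302, substep 1)."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    dx, dy = x1 - x0, y1 - y0                       # assumed Δx, Δy
    return (x1 + dx, y1 + dy), (h + k * abs(dy), w + k * abs(dx))

pos, size = predict([(10, 10), (14, 12)], h=20, w=30)
print(pos, size)   # -> (18, 14) (21.0, 32.0)
```

The grown window (h', w') is then used to crop the estimated target image from the current frame, as in substep 2.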
Step 303: compute the extreme points and corners of the target image f, and compute the corresponding HOG descriptors, obtaining the estimated target vector a'_j.
Step 304: target identification. Match the estimated target vector a'_j against the original target vector a_j. If the match succeeds, the target is tracked successfully and the model information a_j is updated; if the match fails, tracking has failed and the model information is deleted from the list. Specifically:
1. Match the corner features of a'_j and a_j using the method described in step 102; if the match succeeds, proceed to the next substep, otherwise delete the model information a_j of the j-th moving target.
2. Compute the projection matrix A_a of target a_j using the method described in step 103, then obtain the target's centroid coordinates, length and width, and corner information in the current image using the method described in step 104;
3. Update the model information a_j of the j-th moving target with the parameters obtained in substep 2.
Step 305: set j = j + 1; if j does not exceed the length of the target list, go to step 301, otherwise finish.
Correspondingly, the embodiment of the present invention also provides a moving-target detection and tracking system matching the method above. The system comprises a moving-target detection module, a moving-target model building module and a moving-target tracking module, wherein:
Moving-target detection module: captures two consecutive frames f(t-1) and f(t) and applies Gaussian smoothing; extracts the corresponding feature points of f(t-1) and f(t) using Harris corners combined with HOG descriptors, obtaining the corresponding point set S'; computes the projective transformation matrix A from S' by the generalized-inverse method; applies the projective transformation to f(t-1) to obtain the image F; takes the difference of F and f(t) to obtain the difference image D and thresholds it into the binary image B; and filters B to obtain all moving targets.
Extracting the corresponding feature points of f(t-1) and f(t) with Harris corners combined with HOG descriptors specifically comprises: extracting the Harris corner features of f(t-1) and f(t) and building an HOG descriptor for each Harris corner, forming corner-feature vectors (x, y, d), where x and y are respectively the corner's abscissa and ordinate in the image and d is its HOG descriptor; matching the corner-feature vector sets of f(t-1) and f(t) to obtain the corner-matching set S = {(x_i, y_i, d_i) → (x'_i, y'_i, d'_i) | i = 1, 2, 3, …, m}; and filtering the mismatched points out of S with the RANSAC algorithm to obtain the corresponding point set
S' = {(x_i, y_i, d_i) → (x'_i, y'_i, d'_i) | i = 1, 2, 3, …, n}, with n ≤ m.
Moving-target model building module: for each moving target, computes its position and size, crops the target image out of the current frame accordingly, and computes the extreme points, Harris corners and the HOG descriptor of each feature point of the target image; builds the moving-target model from the target information obtained during detection, the model representing the target by the combination of its position in the image, size, motion direction, displacement, Harris corners, extreme points and HOG descriptors. The model is defined as a = {h, w, area, d, desc, track}, where h is the target height, w the target width, area the target area, d the target circularity, desc the target's current set of HOG feature vectors, and track the set of target trajectory points.
Moving-target tracking module: for each moving target, estimates its position and size in the current frame from its model and crops the corresponding region out of the current frame as the estimated target image f; computes the feature values of f, including the Harris corners, extreme points and their HOG descriptors, obtaining the feature information of the estimated target; and matches the features of the estimated target against those of the original target, updating the model information if the match succeeds and deleting it otherwise.
The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent substitution or improvement made within the spirit and principles of the invention shall fall within its protection scope.
Claims (6)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201010515055.8A CN102456225B (en) | 2010-10-22 | 2010-10-22 | Video monitoring system and moving target detecting and tracking method thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102456225A CN102456225A (en) | 2012-05-16 |
CN102456225B true CN102456225B (en) | 2014-07-09 |
Family
ID=46039388
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101009021A (en) * | 2007-01-25 | 2007-08-01 | 复旦大学 | Video stabilizing method based on matching and tracking of characteristic |
CN101109818A (en) * | 2006-07-20 | 2008-01-23 | 中国科学院自动化研究所 | Method for automatically selecting remote sensing image high-precision control point |
Non-Patent Citations (2)
Title |
---|
Luo Gang et al., "Target tracking by corner matching", Chinese Journal of Optics and Applied Optics, Dec. 2009, vol. 2, no. 6, pp. 1-4 *