CN112990012A - Tool color identification method and system under shielding condition - Google Patents

Tool color identification method and system under shielding condition

Info

Publication number
CN112990012A
Authority
CN
China
Prior art keywords
image
pedestrian
video frames
occlusion
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110275035.6A
Other languages
Chinese (zh)
Inventor
Ke Haibin (柯海滨)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Xiwei Intelligent Technology Co ltd
Original Assignee
Shenzhen Xiwei Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Xiwei Intelligent Technology Co ltd filed Critical Shenzhen Xiwei Intelligent Technology Co ltd
Priority to CN202110275035.6A priority Critical patent/CN112990012A/en
Publication of CN112990012A publication Critical patent/CN112990012A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/49 Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Abstract

The invention relates to a tool color identification method and system under a shielding (occlusion) condition. The method comprises the following steps: in response to detecting that a pedestrian in an image is occluded, obtaining an unoccluded complete image and an occluded suspect region from segmentation maps of the pedestrian in a plurality of video frames; and inputting the complete image and the suspect region into a pre-trained deep-learning color classification model to obtain a color identification result for the tool (workwear) worn by the pedestrian. The method enables accurate identification of the color of the tool worn by a pedestrian under occlusion.

Description

Tool color identification method and system under shielding condition
Technical Field
The invention relates to the field of image recognition, and in particular to a tool color recognition method and system under a shielding (occlusion) condition.
Background
Existing tool color identification methods typically apply a color recognition model to single pictures, but their accuracy is often degraded when a person carries an object or the tool is otherwise occluded.
Disclosure of Invention
To address this technical problem, the invention provides a tool color identification method and system under a shielding condition.
The technical solution of the invention is as follows:
A tool color identification method under a shielding condition comprises the following steps:
in response to detecting that a pedestrian in an image is occluded, obtaining an unoccluded complete image and an occluded suspect region from segmentation maps of the pedestrian in a plurality of video frames; and
inputting the complete image and the suspect region into a pre-trained deep-learning color classification model to obtain a color identification result for the tool worn by the pedestrian.
On the basis of the above technical solution, the invention can be further improved as follows.
Further, identifying whether the pedestrian in the image is occluded specifically comprises:
training an occlusion classification model using occluded and non-occluded data sets; and
inputting the image into the occlusion classification model to determine whether the pedestrian in the image is occluded.
Further, obtaining the unoccluded complete image from the segmentation maps of the pedestrian in the plurality of video frames specifically comprises:
completing the segmentation maps of the pedestrian in the plurality of video frames with an image completion technique to obtain the unoccluded complete image.
Further, obtaining the occluded suspect region from the segmentation maps of the pedestrian in the plurality of video frames specifically comprises:
subtracting the segmentation map from the complete image to obtain the occluded suspect region.
Further, the segmentation map acquisition process specifically comprises:
acquiring a plurality of video frames at preset time intervals;
assigning an identity label to each pedestrian in the plurality of video frames using a multi-target tracking technique; and
inputting the plurality of video frames into an instance segmentation model to obtain segmentation maps of pedestrians with the same identity label across the plurality of video frames.
To achieve the above object, the invention further provides a tool color recognition system under a shielding condition, comprising:
an image processing module, configured to, in response to detecting that a pedestrian in an image is occluded, obtain an unoccluded complete image and an occluded suspect region from segmentation maps of the pedestrian in a plurality of video frames; and
a color identification module, configured to input the complete image and the suspect region into a pre-trained deep-learning color classification model to obtain a color identification result for the tool worn by the pedestrian.
Further, the system comprises an occlusion recognition module for identifying whether the pedestrian in the image is occluded, specifically comprising:
a model training unit, configured to train the occlusion classification model using an occluded data set and a non-occluded data set; and
an occlusion recognition unit, configured to input the image into the occlusion classification model and determine whether the pedestrian in the image is occluded.
Further, the image processing module specifically comprises:
an image completion unit, configured to complete the segmentation maps of the pedestrian in the plurality of video frames with an image completion technique to obtain the unoccluded complete image.
Further, the image processing module specifically further comprises:
an occluded-region acquisition unit, configured to subtract the segmentation map from the complete image to obtain the occluded suspect region.
Further, the system comprises a segmentation map processing module for acquiring segmentation maps, specifically comprising:
a video frame acquisition unit, configured to acquire a plurality of video frames at preset time intervals;
a target tracking unit, configured to assign an identity label to each pedestrian in the plurality of video frames using a multi-target tracking technique; and
an instance segmentation unit, configured to input the plurality of video frames into an instance segmentation model and obtain segmentation maps of pedestrians with the same identity label across the plurality of video frames.
The beneficial effects of the invention are as follows:
the color identification method enables accurate identification of the color of the tool worn by a pedestrian under occlusion.
Drawings
Fig. 1 is a flowchart of a tool color identification method under a shielding condition according to an embodiment of the present invention;
FIG. 2 is an actual scene image of a pedestrian;
FIG. 3 is a segmentation map of a pedestrian after instance segmentation;
FIG. 4 shows an occluded suspect region;
FIG. 5 is a flowchart of the segmentation map acquisition process;
FIG. 6 is a scene image in which a pedestrian is occluded over a large area;
Fig. 7 is a block diagram of a tool color recognition system under a shielding condition according to an embodiment of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
Fig. 1 is a flowchart of a tool color identification method under a shielding condition according to an embodiment of the present invention. As shown in Fig. 1, the method comprises:
110. In response to detecting that the pedestrian in the image is occluded, obtain an unoccluded complete image and an occluded suspect region from the segmentation maps of the pedestrian in a plurality of video frames.
FIG. 2 shows an actual scene image of a pedestrian.
Optionally, in this embodiment, identifying whether the pedestrian in the image is occluded may be implemented as follows: train an occlusion classification model using occluded and non-occluded data sets, then input the image into the occlusion classification model to determine whether the pedestrian in the image is occluded.
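The patent does not fix a particular architecture for the occlusion classification model. As an illustrative sketch only, the occluded/non-occluded decision can be framed as a thresholded score over the pedestrian's segmentation mask; the coverage heuristic below is a hypothetical stand-in for the trained classifier, not the patent's method:

```python
import numpy as np

def occlusion_score(person_mask: np.ndarray) -> float:
    """Stand-in scoring: fraction of the person's bounding box that the
    segmentation mask does NOT cover. A real system would instead use a
    CNN classifier trained on occluded / non-occluded image sets."""
    ys, xs = np.nonzero(person_mask)
    if len(ys) == 0:
        return 1.0  # nothing visible: treat as fully occluded
    box_area = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    return 1.0 - person_mask.sum() / box_area

def is_occluded(person_mask: np.ndarray, threshold: float = 0.4) -> bool:
    return occlusion_score(person_mask) > threshold

# A solid silhouette fills its bounding box: low score, not occluded.
full = np.zeros((10, 10), dtype=bool)
full[2:8, 2:8] = True

# The same silhouette with a horizontal band hidden: high score, occluded.
holed = full.copy()
holed[4:7, :] = False
```

The threshold of 0.4 is an illustrative choice; in the patent's scheme it would be replaced by the learned decision of the trained model.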
Tool color identification for non-occluded pedestrians can be performed with existing methods; the invention tracks and recognizes occluded pedestrians, processing the video frame images with an instance segmentation model to obtain segmentation maps of the pedestrian.
FIG. 3 shows the segmentation map of a pedestrian after instance segmentation.
After the segmentation maps of the pedestrian in the input video frames are obtained, an unoccluded complete image can be derived: the segmentation maps of the pedestrian in the plurality of video frames are completed by an image completion technique. Finally, the segmentation map is subtracted from the complete image to obtain the occluded suspect region, as shown in FIG. 4.
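The completion technique itself is left open in the text. A minimal sketch of the two operations just described, using a mask union across frames as a deliberately simple stand-in for image completion (all function names are illustrative, not from the patent):

```python
import numpy as np

def complete_silhouette(masks):
    """Stand-in for image completion: a pixel belongs to the complete
    silhouette if the pedestrian was visible there in ANY sampled frame.
    (The patent only requires that *some* completion technique be used.)"""
    out = np.zeros_like(masks[0], dtype=bool)
    for m in masks:
        out |= m.astype(bool)
    return out

def suspect_region(complete, current):
    """Occluded suspect region = complete silhouette minus the pixels
    actually observed in the current frame's segmentation map."""
    return complete & ~current.astype(bool)

# Two frames in which the occluder hides different halves of the pedestrian.
f1 = np.array([[1, 1, 0, 0]], dtype=bool)
f2 = np.array([[0, 0, 1, 1]], dtype=bool)
full = complete_silhouette([f1, f2])  # whole pedestrian recovered
doubt = suspect_region(full, f1)      # pixels hidden in frame 1
```

The subtraction direction matters: the suspect region is the complete image minus the observed segmentation, never the reverse.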
120. Input the complete image and the suspect region into a pre-trained deep-learning color classification model to obtain the color identification result of the tool worn by the pedestrian.
Specifically, in this step the complete image of the pedestrian serves as a positive sample and the suspect region as a negative sample, and both are input into the deep-learning color classification model for training. The model thus learns the main body color of the pedestrian and does not treat the color of the occluding object as a discriminative feature; trained on a large amount of such data, it can accurately determine the color of the tool worn by the pedestrian.
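One way to read this training scheme (the names and labels below are hypothetical, not from the patent) is that complete images carry the workwear-color label while suspect regions are paired with a dedicated occluder class, so the model never attributes an occluder's color to the tool:

```python
def build_color_training_pairs(samples, occluder_label="occluder"):
    """Build (image, label) pairs for the color classifier.

    `samples` is a list of dicts with keys:
      'complete'    - unoccluded complete image of the pedestrian
      'suspect'     - occluded suspect region (may be None)
      'tool_color'  - ground-truth workwear color label
    Complete images are positive samples for their color; suspect regions
    are negative samples so the model ignores the occluder's color.
    """
    pairs = []
    for s in samples:
        pairs.append((s["complete"], s["tool_color"]))    # positive sample
        if s.get("suspect") is not None:
            pairs.append((s["suspect"], occluder_label))  # negative sample
    return pairs

# Placeholder strings stand in for actual image arrays.
data = [
    {"complete": "img_a", "suspect": "img_a_occ", "tool_color": "blue"},
    {"complete": "img_b", "suspect": None, "tool_color": "orange"},
]
pairs = build_color_training_pairs(data)
```

Whether the negative samples form one "occluder" class or are handled by a loss mask is a design choice the patent leaves open.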
Optionally, in this embodiment, as shown in FIG. 5, the segmentation map acquisition process specifically comprises:
510. Acquire a plurality of video frames at preset time intervals.
520. Assign an identity label to each pedestrian in the plurality of video frames using a multi-target tracking technique.
530. Input the video frames into an instance segmentation model and obtain segmentation maps of pedestrians with the same identity label across the video frames.
Specifically, if a pedestrian is occluded over a large area in a single picture, as shown in FIG. 6, false detections are likely, so multiple video frames need to be captured for processing. In this embodiment, the time interval between video frames and the number of frames can be chosen according to the actual scene; for example, 5 frames are sampled at an interval of 1 s each and used as the input of the instance segmentation model.
To avoid misidentification, the captured video frames must show the same person, so the video is processed with target tracking: a multi-target tracking technique assigns an ID to each person in the video frames, ensuring that the 5 pictures belong to the same person. Instance segmentation is then performed on the pedestrian with the same ID, yielding 5 segmentation maps of that pedestrian.
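The sampling-and-tracking step above can be sketched as follows; the detection tuples and mask placeholders are illustrative, and a real pipeline would receive them from a multi-target tracker of the kind the patent assumes:

```python
from collections import defaultdict

def frames_per_id(detections, frames_needed=5):
    """Group tracked detections by pedestrian ID and keep only the IDs
    observed in the required number of sampled frames (e.g. 5 frames
    taken 1 s apart, as in the embodiment).

    `detections` is an iterable of (frame_index, pedestrian_id, mask).
    Returns {pedestrian_id: [mask, ...]} for fully covered IDs only.
    """
    by_id = defaultdict(dict)
    for frame_idx, pid, mask in detections:
        by_id[pid][frame_idx] = mask  # one mask per (id, frame)
    return {
        pid: [frames[i] for i in sorted(frames)]
        for pid, frames in by_id.items()
        if len(frames) >= frames_needed
    }

# Two tracked pedestrians over 5 sampled frames; ID 2 is lost in frame 3.
dets = [(f, pid, f"mask_{pid}_{f}") for f in range(5) for pid in (1, 2)]
dets = [d for d in dets if not (d[1] == 2 and d[0] == 3)]
groups = frames_per_id(dets)  # only ID 1 keeps all 5 segmentation maps
```

Dropping IDs with incomplete coverage mirrors the text's requirement that all 5 pictures belong to the same person before instance segmentation proceeds.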
Fig. 7 is a block diagram of a tool color recognition system under a shielding condition according to an embodiment of the present invention; the working principle of each functional module has been explained above and is not repeated below.
As shown in fig. 7, the system includes:
an image processing module, configured to, in response to detecting that a pedestrian in an image is occluded, obtain an unoccluded complete image and an occluded suspect region from segmentation maps of the pedestrian in a plurality of video frames; and
a color identification module, configured to input the complete image and the suspect region into a pre-trained deep-learning color classification model to obtain a color identification result for the tool worn by the pedestrian.
Optionally, in this embodiment, the system further comprises an occlusion recognition module for identifying whether the pedestrian in the image is occluded, specifically comprising:
a model training unit, configured to train the occlusion classification model using an occluded data set and a non-occluded data set; and
an occlusion recognition unit, configured to input the image into the occlusion classification model and determine whether the pedestrian in the image is occluded.
Optionally, in this embodiment, the image processing module specifically comprises:
an image completion unit, configured to complete the segmentation maps of the pedestrian in the plurality of video frames with an image completion technique to obtain the unoccluded complete image.
Optionally, in this embodiment, the image processing module specifically further comprises:
an occluded-region acquisition unit, configured to subtract the segmentation map from the complete image to obtain the occluded suspect region.
Optionally, in this embodiment, the system further comprises a segmentation map processing module for acquiring segmentation maps, specifically comprising:
a video frame acquisition unit, configured to acquire a plurality of video frames at preset time intervals;
a target tracking unit, configured to assign an identity label to each pedestrian in the plurality of video frames using a multi-target tracking technique; and
an instance segmentation unit, configured to input the plurality of video frames into an instance segmentation model and obtain segmentation maps of pedestrians with the same identity label across the plurality of video frames.
The reader should understand that in the description of this specification, reference to the description of the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the modules and units in the above described system embodiment may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the part of the technical solution of the present invention that in essence contributes over the prior art, or all or part of the technical solution, can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
While the invention has been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A tool color identification method under a shielding condition, characterized by comprising:
in response to detecting that a pedestrian in an image is occluded, obtaining an unoccluded complete image and an occluded suspect region from segmentation maps of the pedestrian in a plurality of video frames; and
inputting the complete image and the suspect region into a pre-trained deep-learning color classification model to obtain a color identification result for the tool worn by the pedestrian.
2. The method according to claim 1, wherein identifying whether the pedestrian in the image is occluded specifically comprises:
training an occlusion classification model using occluded and non-occluded data sets; and
inputting the image into the occlusion classification model to determine whether the pedestrian in the image is occluded.
3. The method according to claim 1, wherein obtaining the unoccluded complete image from the segmentation maps of the pedestrian in the plurality of video frames specifically comprises:
completing the segmentation maps of the pedestrian in the plurality of video frames with an image completion technique to obtain the unoccluded complete image.
4. The method according to claim 1, wherein obtaining the occluded suspect region from the segmentation maps of the pedestrian in the plurality of video frames specifically comprises:
subtracting the segmentation map from the complete image to obtain the occluded suspect region.
5. The method according to any one of claims 1 to 4, wherein the segmentation map acquisition process specifically comprises:
acquiring a plurality of video frames at preset time intervals;
assigning an identity label to each pedestrian in the plurality of video frames using a multi-target tracking technique; and
inputting the plurality of video frames into an instance segmentation model to obtain segmentation maps of pedestrians with the same identity label across the plurality of video frames.
6. A tool color recognition system under a shielding condition, characterized by comprising:
an image processing module, configured to, in response to detecting that a pedestrian in an image is occluded, obtain an unoccluded complete image and an occluded suspect region from segmentation maps of the pedestrian in a plurality of video frames; and
a color identification module, configured to input the complete image and the suspect region into a pre-trained deep-learning color classification model to obtain a color identification result for the tool worn by the pedestrian.
7. The system according to claim 6, further comprising an occlusion recognition module for identifying whether the pedestrian in the image is occluded, specifically comprising:
a model training unit, configured to train the occlusion classification model using an occluded data set and a non-occluded data set; and
an occlusion recognition unit, configured to input the image into the occlusion classification model and determine whether the pedestrian in the image is occluded.
8. The system according to claim 6, wherein the image processing module specifically comprises:
an image completion unit, configured to complete the segmentation maps of the pedestrian in the plurality of video frames with an image completion technique to obtain the unoccluded complete image.
9. The system according to claim 6, wherein the image processing module further comprises:
an occluded-region acquisition unit, configured to subtract the segmentation map from the complete image to obtain the occluded suspect region.
10. The system according to any one of claims 6 to 9, further comprising a segmentation map processing module for acquiring segmentation maps, specifically comprising:
a video frame acquisition unit, configured to acquire a plurality of video frames at preset time intervals;
a target tracking unit, configured to assign an identity label to each pedestrian in the plurality of video frames using a multi-target tracking technique; and
an instance segmentation unit, configured to input the plurality of video frames into an instance segmentation model and obtain segmentation maps of pedestrians with the same identity label across the plurality of video frames.
CN202110275035.6A 2021-03-15 2021-03-15 Tool color identification method and system under shielding condition Pending CN112990012A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110275035.6A CN112990012A (en) 2021-03-15 2021-03-15 Tool color identification method and system under shielding condition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110275035.6A CN112990012A (en) 2021-03-15 2021-03-15 Tool color identification method and system under shielding condition

Publications (1)

Publication Number Publication Date
CN112990012A true CN112990012A (en) 2021-06-18

Family

ID=76335551

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110275035.6A Pending CN112990012A (en) 2021-03-15 2021-03-15 Tool color identification method and system under shielding condition

Country Status (1)

Country Link
CN (1) CN112990012A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107766861A (en) * 2017-11-14 2018-03-06 深圳码隆科技有限公司 The recognition methods of character image clothing color, device and electronic equipment
CN109670429A (en) * 2018-12-10 2019-04-23 广东技术师范学院 A kind of the monitor video multiple target method for detecting human face and system of Case-based Reasoning segmentation
CN109948474A (en) * 2019-03-04 2019-06-28 成都理工大学 AI thermal imaging all-weather intelligent monitoring method
CN111325806A (en) * 2020-02-18 2020-06-23 苏州科达科技股份有限公司 Clothing color recognition method, device and system based on semantic segmentation
CN111898561A (en) * 2020-08-04 2020-11-06 腾讯科技(深圳)有限公司 Face authentication method, device, equipment and medium
US20210056715A1 (en) * 2019-08-20 2021-02-25 Boe Technology Group Co., Ltd. Object tracking method, object tracking device, electronic device and storage medium
CN112464893A (en) * 2020-12-10 2021-03-09 山东建筑大学 Congestion degree classification method in complex environment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YU Bo et al.: "Real-time pedestrian detection and adaptive instance segmentation in vehicle-mounted far-infrared images", Laser & Optoelectronics Progress, vol. 57, no. 2, pages 021507-1 *

Similar Documents

Publication Publication Date Title
CN107527009B (en) Remnant detection method based on YOLO target detection
CN105938622B (en) Method and apparatus for detecting object in moving image
US8340420B2 (en) Method for recognizing objects in images
CN108805900B (en) Method and device for determining tracking target
US20130243343A1 (en) Method and device for people group detection
CN103093198B (en) A kind of crowd density monitoring method and device
CN109918971B (en) Method and device for detecting number of people in monitoring video
CN105631418A (en) People counting method and device
CN110909692A (en) Abnormal license plate recognition method and device, computer storage medium and electronic equipment
CN106570439B (en) Vehicle detection method and device
CN112633255B (en) Target detection method, device and equipment
CN111383244B (en) Target detection tracking method
CN105868708A (en) Image object identifying method and apparatus
CN113449606B (en) Target object identification method and device, computer equipment and storage medium
CN107346417B (en) Face detection method and device
CN111353385A (en) Pedestrian re-identification method and device based on mask alignment and attention mechanism
CN113468914A (en) Method, device and equipment for determining purity of commodities
US20200394802A1 (en) Real-time object detection method for multiple camera images using frame segmentation and intelligent detection pool
CN117475353A (en) Video-based abnormal smoke identification method and system
CN111753642B (en) Method and device for determining key frame
CN112686247A (en) Identification card number detection method and device, readable storage medium and terminal
KR101595334B1 (en) Method and apparatus for movement trajectory tracking of moving object on animal farm
CN112990012A (en) Tool color identification method and system under shielding condition
CN113255549B (en) Intelligent recognition method and system for behavior state of wolf-swarm hunting
CN115083004A (en) Identity recognition method and device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination