CN111340837A - Image processing method, device, equipment and storage medium - Google Patents

Image processing method, device, equipment and storage medium

Info

Publication number
CN111340837A
CN111340837A CN202010098364.3A
Authority
CN
China
Prior art keywords
image
segmented
images
close
target vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010098364.3A
Other languages
Chinese (zh)
Inventor
周康明
邹芳喻
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Eye Control Technology Co Ltd
Original Assignee
Shanghai Eye Control Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Eye Control Technology Co Ltd filed Critical Shanghai Eye Control Technology Co Ltd
Priority to CN202010098364.3A priority Critical patent/CN111340837A/en
Publication of CN111340837A publication Critical patent/CN111340837A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30236Traffic on road, railway or crossing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to an image processing method, apparatus, device and storage medium. A terminal acquires a spliced image of a target vehicle suspected of illegal driving, segments the spliced image to obtain a plurality of segmented images, detects whether each segmented image is a close-up image, deletes the close-up images from the segmented images, and determines whether the target vehicle is driving illegally using the remaining segmented images. The spliced image is obtained by splicing a plurality of captured road images of the target vehicle. Because the close-up images, in which the target vehicle occupies most of the frame, are removed before the secondary review, and the target vehicle occupies only a small area in the remaining segmented images, the accuracy of the secondary review of the spliced image is improved.

Description

Image processing method, device, equipment and storage medium
Technical Field
The present application relates to the field of image recognition technologies, and in particular, to an image processing method, an image processing apparatus, an image processing device, and a storage medium.
Background
As the number of vehicles grows, traffic travel becomes more convenient, but illegal driving also occurs from time to time.
In the traditional method, cameras installed along the road capture road images, and the captured road images are input into a preset traffic violation recognition model, which identifies whether a vehicle is driving illegally. When a suspected illegal vehicle is found, the captured road images of that vehicle are spliced into one image, and a secondary review of the spliced image determines whether the vehicle actually drove illegally.
However, the accuracy of performing the secondary audit on the spliced image is low.
Disclosure of Invention
Therefore, it is necessary to provide an image processing method, an apparatus, a device, and a storage medium for solving the problem of low accuracy of secondary audit of a stitched image.
In a first aspect, a method of image processing, the method comprising:
acquiring a spliced image of a target vehicle suspected to run illegally, wherein the spliced image is obtained by splicing a plurality of captured road images of the target vehicle;
performing segmentation processing on the spliced image to obtain a plurality of segmented images;
and detecting whether each segmented image is a close-up image or not, deleting the close-up image from the plurality of segmented images, and determining whether the target vehicle illegally runs or not by using the rest segmented images.
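The three steps of the first aspect can be sketched as a short filter-then-review routine; `is_close_up` and `is_violation` are hypothetical callables standing in for the detection and recognition models described below, not names used by the application:

```python
def review_spliced_image(segmented_images, is_close_up, is_violation):
    """Secondary review: drop close-up images, then decide on the rest.

    `is_close_up` and `is_violation` are placeholder predicates for the
    close-up detector and the violation recognition model (assumptions).
    """
    # Delete the close-up images from the plurality of segmented images.
    remaining = [img for img in segmented_images if not is_close_up(img)]
    # Determine illegal driving using only the remaining segmented images.
    return any(is_violation(img) for img in remaining), remaining
```

A segment flagged as a close-up never reaches the violation check, which is the point of the secondary review.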
In one embodiment, the detecting whether each segmented image is a close-up image includes:
detecting whether each segmented image is a close-up image according to the image characteristic information of each segmented image; wherein the image characteristic information includes at least one of a size of the target vehicle, a number of vehicles in the segmented image, a background image of the segmented image, and a road sign in the segmented image.
In one embodiment, the detecting whether each of the segmented images is a close-up image according to the image feature information of each of the segmented images includes:
and for each segmented image, detecting whether the size of the target vehicle in the segmented image is larger than a preset size threshold value, and if the size of the target vehicle in the segmented image is larger than the preset size threshold value, determining that the segmented image is a close-up image.
In one embodiment, the detecting whether each of the segmented images is a close-up image according to the image feature information of each of the segmented images includes:
and for each segmented image, detecting whether the number of vehicles in the segmented image is smaller than a preset number threshold value, and if the number of vehicles in the segmented image is smaller than the preset number threshold value, determining that the segmented image is a close-up image.
In one embodiment, the detecting whether each of the segmented images is a close-up image according to the image feature information of each of the segmented images includes:
extracting corresponding background images from the segmentation images respectively;
comparing background images extracted from the segmented images, and determining whether a target background image exists in each background image according to a comparison result, wherein the difference between the target background image and other background images is greater than a preset difference threshold value;
and determining the segmentation image corresponding to the target background image as a close-up image.
In one embodiment, the extracting the background image from each of the segmented images respectively includes:
acquiring the position information of the target vehicle in each segmented image to obtain a position information set of the target vehicle on each segmented image;
and setting the pixels of the area corresponding to the position information set on each segmented image as the same preset value to obtain the background image corresponding to each segmented image.
In one embodiment, the detecting whether each of the segmented images is a close-up image according to the image feature information of each of the segmented images includes:
identifying a road mark in each segmented image;
acquiring similarity between every two road identifications recognized from each segmentation image, and determining whether a target segmentation image exists in each segmentation image according to the similarity, wherein the similarity between the road identification in the target segmentation image and the road identification in other segmentation images is smaller than a preset similarity threshold;
the target segmented image is determined to be a close-up image.
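As a rough sketch of this embodiment, a segment counts as the target segmented image when the similarity between its road markings and those of every other segment stays below the preset similarity threshold; the feature representation and similarity function are assumptions, not specified by the application:

```python
def find_targets_by_sign_similarity(sign_features, similarity, threshold):
    """Indices of segments whose road-sign features are dissimilar to the
    signs in every other segment (all pairwise similarities < threshold)."""
    targets = []
    for i, fi in enumerate(sign_features):
        sims = [similarity(fi, fj)
                for j, fj in enumerate(sign_features) if j != i]
        if sims and max(sims) < threshold:
            targets.append(i)
    return targets
```

With scalar features and `1 - |a - b|` as a toy similarity, the segment whose markings differ from all others is flagged.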
In one embodiment, the segmenting the stitched image to obtain a plurality of segmented images includes:
segmenting the spliced image according to a preset segmentation mode to obtain a plurality of pre-segmented images; the preset segmentation mode is used for indicating the corresponding size of each pre-segmentation image;
and inputting the pre-segmentation images into a preset boundary identification model to obtain a plurality of segmentation images.
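A minimal sketch of the pre-segmentation step, assuming the preset segmentation mode simply lists the strip heights of a vertically spliced image; the boundary recognition model that refines these pre-segmented images is not sketched here:

```python
def pre_segment(spliced_rows, strip_heights):
    """Cut a spliced image (given as a list of pixel rows) into horizontal
    strips whose heights come from the preset segmentation mode."""
    pieces, top = [], 0
    for h in strip_heights:
        pieces.append(spliced_rows[top:top + h])
        top += h
    return pieces
```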
In a second aspect, an image processing apparatus, the apparatus comprising:
the acquisition module is used for acquiring a spliced image of a target vehicle suspected of illegal driving, wherein the spliced image is obtained by splicing a plurality of captured road images of the target vehicle;
the segmentation module is used for carrying out segmentation processing on the spliced image to obtain a plurality of segmented images;
and the determining module is used for detecting whether each segmented image is a close-up image, deleting the close-up image from the plurality of segmented images and determining whether the target vehicle illegally runs by using the rest segmented images.
In a third aspect, a computer device comprises a memory storing a computer program and a processor implementing the method steps of the first aspect when the processor executes the computer program.
In a fourth aspect, a computer-readable storage medium has stored thereon a computer program which, when executed by a processor, carries out the method steps of the first aspect.
According to the image processing method, apparatus, device and storage medium, the terminal acquires a spliced image of a target vehicle suspected of illegal driving, segments the spliced image to obtain a plurality of segmented images, detects whether each segmented image is a close-up image, deletes the close-up images from the segmented images, and determines whether the target vehicle is driving illegally using the remaining segmented images, wherein the spliced image is obtained by splicing a plurality of captured road images of the target vehicle. When the spliced image is reviewed a second time, the close-up images, in which the target vehicle occupies most of the frame, are removed first, and the review is performed on the remaining segmented images, in which the target vehicle occupies only a small area, so that whether the target vehicle is driving illegally can be judged more clearly. This improves the accuracy of the violation determination and, in turn, the accuracy of the secondary review of the spliced image.
Drawings
FIG. 1 is a diagram illustrating an exemplary environment in which an image processing method may be applied;
FIG. 2 is a flow diagram illustrating a method for image processing according to one embodiment;
FIG. 2a is a schematic illustration of stitching images in one embodiment;
FIG. 2b is a schematic illustration of stitching images in one embodiment;
FIG. 2c is a schematic illustration of stitching images in one embodiment;
FIG. 3 is a flow chart illustrating an image processing method according to another embodiment;
FIG. 4 is a flowchart illustrating an image processing method according to another embodiment;
FIG. 5 is a flowchart illustrating an image processing method according to another embodiment;
FIG. 6 is a flowchart illustrating an image processing method according to another embodiment;
FIG. 7 is a flowchart illustrating an image processing method according to another embodiment;
fig. 8 is a schematic configuration diagram of an image processing apparatus provided in one embodiment;
FIG. 9 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
The image processing method, the image processing device, the image processing equipment and the storage medium aim at solving the problem of low accuracy of secondary audit. The following describes in detail the technical solutions of the present application and how the technical solutions of the present application solve the above technical problems by embodiments and with reference to the drawings. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
It should be noted that, in the image processing method provided in the embodiment of the present application, the execution subject is an image processing terminal, where the execution subject may also be an image processing apparatus, and the apparatus may be implemented as part or all of the image processing terminal by software, hardware, or a combination of software and hardware.
The image processing method provided by the embodiment can be applied to the application environment shown in fig. 1. The image processing terminal 100 may perform a secondary audit on the stitched image 110 to determine whether the vehicle is illegally driven. The image processing terminal 100 may be, but is not limited to, an electronic device with a data processing function, such as a smart phone, a tablet computer, a notebook computer, a desktop computer, or a personal digital assistant, and the specific form of the image processing terminal 100 is not limited in this embodiment.
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments.
FIG. 2 is a flowchart illustrating an image processing method according to an embodiment. The present embodiment relates to the specific process of how to remove close-up images from a plurality of segmented images and determine, from the remaining segmented images, whether a target vehicle is driving illegally. As shown in fig. 2, the method comprises the following steps:
s101, obtaining a spliced image of a target vehicle suspected to run illegally, wherein the spliced image is obtained by splicing a plurality of captured road images of the target vehicle.
The target vehicle may be any vehicle suspected of illegal driving, such as a passenger car, an engineering vehicle, a motorcycle, an electric bicycle or a bicycle, which is not limited in the embodiments of the present application. The spliced image may be obtained by splicing a plurality of captured road images containing the target vehicle; one spliced image may contain road images of a single target vehicle suspected of illegal driving, or road images of several such target vehicles. The road images in the spliced image may have the same or different sizes, and the splicing arrangement is likewise not limited in the embodiments of the present application. For example, if the spliced image contains 3 road images, they may be arranged in a horizontal row as shown in fig. 2a, in a vertical column as shown in fig. 2b, or with 1 road image in the first row and 2 road images in the second row as shown in fig. 2c.
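The row and column layouts of figs. 2a and 2b can be reproduced with plain array concatenation; this is an illustrative sketch for equally sized road images, not the application's splicing procedure:

```python
import numpy as np

def splice(images, layout="horizontal"):
    """Compose equally sized road images into one spliced image.

    "horizontal" gives the row layout of fig. 2a, "vertical" the column
    layout of fig. 2b; the mixed layout of fig. 2c would combine both.
    """
    if layout == "horizontal":
        return np.hstack(images)
    if layout == "vertical":
        return np.vstack(images)
    raise ValueError(f"unknown layout: {layout}")
```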
When it is finally determined whether the target vehicle is illegally driven, the image of the target vehicle which is identified by the first identification model and is suspected to be illegally driven needs to be subjected to secondary verification. The first identification model can be a neural network model and is used for identifying road images of target vehicles suspected of illegal driving from road images acquired by image acquisition devices arranged at two sides of a road. When the road image of the target vehicle suspected to run illegally exists, the first identification model can output a plurality of road images of the target vehicles suspected to run illegally, and can also output a spliced image obtained by splicing the road images of the same target vehicle suspected to run illegally. Correspondingly, when the road image of the target vehicle suspected of illegally traveling is stored, the spliced image output by the first recognition model can be directly stored, or the road image of the target vehicle suspected of illegally traveling output by the first recognition model can be spliced to obtain the spliced image. When the stitched image of the target vehicle suspected of illegally traveling is obtained, the stored stitched image can be directly called to obtain the stitched image.
And S102, carrying out segmentation processing on the spliced image to obtain a plurality of segmented images.
The segmentation image can be obtained by segmenting the spliced image, and the similarity between each segmentation image and each road image in the spliced image is greater than a preset threshold value. When the stitched image is segmented to obtain a plurality of segmented images, different regions on the stitched image may be segmented into corresponding segmented images by receiving a segmentation instruction input by a user, or the stitched image may be input into a segmentation model for segmenting the image to obtain a plurality of segmented images, which is not limited in the embodiment of the present application.
S103, detecting whether each segmented image is a close-up image or not, deleting the close-up image from the segmented images, and determining whether the target vehicle illegally runs or not by using the rest segmented images.
A close-up image is an image in which the size of the vehicle is larger than a preset size threshold; such an image is usually obtained by adjusting the focal length of the image acquisition device so that the features of the vehicle can be recognized more accurately. Because the vehicle in a close-up image is larger than the preset size threshold, it occupies a large proportion of the image area, and the corresponding road markings occupy only a small area. Judging whether a vehicle is driving illegally, however, requires the relative position between the vehicle and the road markings. Therefore, the accuracy of judging illegal driving from a close-up image is relatively low.
On the basis of the above embodiment, when the terminal obtains the segmented images corresponding to the spliced image of the target vehicle, it can detect whether a close-up image exists among the segmented images; if so, the close-up image is deleted from the segmented images, and the remaining segmented images are used to determine whether the target vehicle is driving illegally. Whether a segmented image is a close-up image may be determined by receiving a selection instruction input by a user, or by inputting each segmented image into a preset recognition model, which is not limited in the embodiments of the present application. The terminal may input the remaining segmented images into a second recognition model to determine whether the target vehicle in them is driving illegally. It should be noted that this second recognition model may be the same recognition model as the first recognition model or a different one, which is not limited in this embodiment.
In the image processing method, the terminal acquires the spliced image of the target vehicle suspected of illegal driving, segments the spliced image to obtain a plurality of segmented images, detects whether each segmented image is a close-up image, deletes the close-up images from the segmented images, and determines whether the target vehicle is driving illegally using the remaining segmented images, wherein the spliced image is obtained by splicing a plurality of captured road images of the target vehicle. When the spliced image is reviewed a second time, the close-up images, in which the target vehicle occupies most of the frame, are removed first, and the review is performed on the remaining segmented images, in which the target vehicle occupies only a small area, so that whether the target vehicle is driving illegally can be judged more clearly. This improves the accuracy of the violation determination and, in turn, the accuracy of the secondary review of the spliced image.
When the terminal detects whether each segmented image is a close-up image, optionally, whether each segmented image is a close-up image can be detected according to the image characteristic information of each segmented image; wherein the image characteristic information includes at least one of a size of the target vehicle, a number of vehicles in the segmented image, a background image of the segmented image, and a road sign in the segmented image.
In one embodiment, whether each segmented image is a close-up image may be detected according to the size of the target vehicle in the segmented image. Optionally, for each segmented image, it is detected whether the size of the target vehicle in the segmented image is larger than a preset size threshold; if so, the segmented image is determined to be a close-up image.
When the image feature information indicates the size of the target vehicle in the image, the size of the target vehicle in the segmented image can be determined by a detection method based on YOLO_V3. Optionally, MobileNet_V3 can serve as the backbone network of the YOLO_V3 detector to increase its processing speed. After the size of the target vehicle in each segmented image is obtained, the preset size threshold can be retrieved, and the segmented images in which the target vehicle is larger than the preset size threshold are determined to be close-up images. In one possible scenario, if no segmented image contains a target vehicle larger than the preset size threshold, there is no close-up image among the segmented images.
In the image processing method, whether the size of the target vehicle in each segmented image is larger than the preset size threshold is detected, and if so, the segmented image is determined to be a close-up image. As a result, when judging whether the target vehicle is driving illegally, the target vehicle occupies a smaller area in each remaining image, so the judgment can be made more clearly, which improves the accuracy of the violation determination and of the secondary review of the spliced image.
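A hedged sketch of the size check; expressing the preset size threshold as a fraction of the segment area is an assumption, and the bounding box is taken to come from a YOLO_V3-style detector:

```python
def is_close_up_by_size(vehicle_box, segment_shape, ratio_threshold=0.5):
    """True when the detected target-vehicle box covers more of the segment
    than the threshold allows (the ratio form is an assumption)."""
    x0, y0, x1, y1 = vehicle_box
    height, width = segment_shape
    # Compare the box area against the segment area.
    return (x1 - x0) * (y1 - y0) / (width * height) > ratio_threshold
```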
In one embodiment, whether each segmented image is a close-up image may be detected according to the number of vehicles in the image. Optionally, for each segmented image, it is detected whether the number of vehicles in the segmented image is smaller than a preset number threshold; if so, the segmented image is determined to be a close-up image.
When the image feature information indicates the number of vehicles in the image, the number of vehicles in the segmented image may be determined by a detection method based on YOLO_V3, again optionally with MobileNet_V3 as the backbone network to increase the processing speed. The more vehicles a segmented image contains, the smaller the proportion of the image area occupied by the target vehicle, that is, the lower the probability that the segmented image is a close-up image. After the number of vehicles in each segmented image is obtained, a pre-stored preset number threshold can be retrieved, and the segmented images with fewer vehicles than the preset number threshold are determined to be close-up images. In one possible scenario, if no segmented image contains fewer vehicles than the preset number threshold, there is no close-up image among the segmented images.
In the image processing method, whether the number of vehicles in each segmented image is smaller than the preset number threshold is detected, and if so, the segmented image is determined to be a close-up image. In the remaining images the number of vehicles is at least the preset number threshold, that is, the target vehicle occupies a smaller proportion of the image area, so whether the target vehicle is driving illegally can be judged more clearly, which improves the accuracy of the violation determination and of the secondary review of the spliced image.
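The count check reduces to comparing the number of detections against the preset number threshold; the default threshold value here is illustrative:

```python
def is_close_up_by_count(vehicle_detections, count_threshold=2):
    """Close-up when fewer vehicles than the preset number threshold are
    detected: a zoomed-in frame usually shows little besides the target."""
    return len(vehicle_detections) < count_threshold
```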
In one embodiment, when detecting whether each segmented image is a close-up image according to its image feature information, it may also be determined whether the segmented image is a close-up image based on its corresponding background image. As shown in fig. 3, one possible implementation of the above "detecting whether each segmented image is a close-up image according to the image feature information of each segmented image" includes:
s201, extracting a corresponding background image from each of the segmented images.
The background image may be the image corresponding to the region of a segmented image other than the target vehicle. It can be obtained either by setting the region corresponding to the target vehicle to a single pixel value or by removing that region, which is not limited in the embodiments of the present application. It should be noted that an image acquisition device normally adjusts its focal length when switching between ordinary road images and close-up images; therefore, if none of the road images captured by the same device is a close-up, their background regions show the same background. Optionally, the terminal may obtain the background image of each segmented image by setting the pixels of the target vehicle in each segmented image to the same pixel value. As shown in fig. 4, one possible implementation of S201 "extracting a corresponding background image from each of the segmented images" includes the following steps:
s301, position information of the target vehicle in each segmented image is obtained, and a position information set of the target vehicle on each segmented image is obtained.
When the image feature information indicates the background image corresponding to a segmented image, the region in which the target vehicle is located can be determined by a detection method based on YOLO_V3, again optionally with MobileNet_V3 as the backbone network to increase the processing speed. If the spliced image is split into multiple segmented images, the position information of the target vehicle in each segmented image is determined separately, and these items of position information are collected into the position information set of the target vehicle over the segmented images. For example, when the spliced image is split into 3 segmented images, the region of the target vehicle in segmented image 1 is the rectangular area with X-axis range (50,100) and Y-axis range (50,100); in segmented image 2, the rectangular area with X-axis range (60,110) and Y-axis range (60,110); and in segmented image 3, the rectangular area with X-axis range (30,300) and Y-axis range (30,300). The position information set of the target vehicle over the segmented images is obtained from these 3 regions.
S302, setting the pixels of the area corresponding to the position information set on each segmented image to be the same preset value, and obtaining the background image corresponding to each segmented image.
After the position information sets on the segmented images are obtained, the pixels of the regions corresponding to the position information set can be set to the same preset value in each segmented image, so as to obtain the background image corresponding to each segmented image. Continuing with the example of three segmented images, suppose the region where the target vehicle is located in segmented image 1 is the rectangular region with X-axis coordinate range (50,100) and Y-axis coordinate range (50,100); the region where the target vehicle is located in segmented image 2 is the rectangular region with X-axis coordinate range (60,110) and Y-axis coordinate range (60,110); and the region where the target vehicle is located in segmented image 3 is the rectangular region with X-axis coordinate range (30,300) and Y-axis coordinate range (30,300). Then, in segmented image 1, the pixels of all three rectangular regions, namely the region with X-axis range (50,100) and Y-axis range (50,100), the region with X-axis range (60,110) and Y-axis range (60,110), and the region with X-axis range (30,300) and Y-axis range (30,300), are set to the preset value, which yields the background image corresponding to segmented image 1. The pixels of the same three rectangular regions are likewise set to the preset value in segmented image 2 and in segmented image 3, yielding the background images corresponding to segmented image 2 and segmented image 3, respectively.
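The masking step above can be sketched in a few lines of NumPy. This is a minimal illustration only; the box coordinate format, the fill value of 0, and the function name are assumptions, not part of the patent:

```python
import numpy as np

def extract_background(segmented_image, vehicle_boxes, fill_value=0):
    # Blank out every region where the target vehicle appears in ANY
    # segmented image, so the remaining backgrounds stay comparable.
    background = segmented_image.copy()
    for x1, x2, y1, y2 in vehicle_boxes:
        background[y1:y2, x1:x2] = fill_value  # rows index Y, columns index X
    return background

# The three rectangles from the example above, applied to segmented image 1.
boxes = [(50, 100, 50, 100), (60, 110, 60, 110), (30, 300, 30, 300)]
image = np.random.randint(0, 256, (400, 400, 3), dtype=np.uint8)
background_1 = extract_background(image, boxes)
```

Applying the same union of rectangles to every segmented image is what keeps the resulting background images directly comparable in the next step.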
S202, comparing the background images extracted from the segmented images, and determining, according to the comparison result, whether a target background image exists among the background images, wherein the difference between the target background image and the other background images is greater than a preset difference threshold.
On the basis of the above embodiment, after the background images corresponding to the segmented images are obtained, the background images may be compared pairwise to obtain a comparison result for each pair. When two background images are compared, the pixel value of each point in one background image is subtracted from the pixel value of the corresponding point in the other, and the resulting pixel difference serves as the comparison result for that pair. If there is a background image whose pixel difference from every other background image is greater than the preset difference threshold, that background image is the target background image.
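As a sketch of this pairwise comparison: the patent specifies subtracting corresponding pixel values but does not fix how the per-pixel differences are aggregated, so the mean absolute difference used below is an assumption:

```python
import numpy as np

def find_target_background(backgrounds, diff_threshold):
    # Return the index of a background whose mean absolute pixel
    # difference from every other background exceeds the threshold,
    # or None if no such target background exists.
    for i, bg_i in enumerate(backgrounds):
        diffs = [np.abs(bg_i.astype(np.int32) - bg_j.astype(np.int32)).mean()
                 for j, bg_j in enumerate(backgrounds) if j != i]
        if diffs and all(d > diff_threshold for d in diffs):
            return i
    return None

# Two near-identical backgrounds and one very different one:
a = np.zeros((4, 4), dtype=np.uint8)
b = np.zeros((4, 4), dtype=np.uint8)
c = np.full((4, 4), 200, dtype=np.uint8)
target = find_target_background([a, b, c], diff_threshold=50)
```

Casting to a signed integer type before subtracting avoids the wrap-around that unsigned `uint8` subtraction would otherwise produce.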
And S203, determining the segmentation image corresponding to the target background image as a close-up image.
In the above image processing method, corresponding background images are extracted from the segmented images, the extracted background images are compared, whether a target background image whose difference from the others exceeds the preset difference threshold exists is determined according to the comparison result, and the segmented image corresponding to the target background image is determined as the close-up image. In other words, the segmented image whose background differs from the backgrounds of the other segmented images, that is, the image captured at a focal length different from the others, is taken as the close-up image. This improves the accuracy of the determined close-up image, and in turn the accuracy of the secondary check, based on the segmented images other than the close-up image, of whether the target vehicle drove illegally.
In one embodiment, as shown in fig. 5, when detecting whether each segmented image is a close-up image according to the image feature information of each segmented image, the determination may also be made according to the road markings in the segmented image. As shown in fig. 5, one possible implementation of the above "detecting whether each of the segmented images is a close-up image according to the image feature information of each of the segmented images" includes:
S401, identifying the road markings in each segmented image.
A road marking may refer to a marked line on the road, such as a double yellow line, a single yellow line, a solid line, or a dashed line. When the image feature information indicates the road markings in the image, a DFANet network can be used to segment and recognize the road markings in each segmented image and determine which type of marked line each one is, that is, a double yellow line, a single yellow line, a solid line, or a dashed line, so as to obtain the road markings in each segmented image.
S402, obtaining the pairwise similarity between the road markings identified in the segmented images, and determining, according to the similarities, whether a target segmented image exists among the segmented images, wherein the similarity between the road markings of the target segmented image and the road markings of the other segmented images is smaller than a preset similarity threshold.
Similar to the above embodiment, when the road images are captured, the close-up image is taken at a focal length different from that of the other road images; therefore, its road markings may differ significantly from those of the other images. After the road markings in the segmented images are identified, the road markings of the segmented images can be compared pairwise to obtain the similarity for each pair. If the similarity between the road markings of one segmented image and the road markings of every other segmented image is smaller than the preset similarity threshold, that segmented image is the target segmented image.
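A minimal sketch of this pairwise check, using intersection-over-union of binary road-marking masks as the similarity measure; the patent does not specify a similarity metric, so IoU here is an assumption for illustration:

```python
import numpy as np

def mask_iou(mask_a, mask_b):
    # Intersection-over-union of two binary road-marking masks.
    union = np.logical_or(mask_a, mask_b).sum()
    if union == 0:
        return 1.0  # two empty masks are treated as identical
    return np.logical_and(mask_a, mask_b).sum() / union

def find_target_segmented_image(marking_masks, sim_threshold):
    # Index of the image whose markings are dissimilar to those of
    # every other image (similarity below the threshold), or None.
    for i, m_i in enumerate(marking_masks):
        sims = [mask_iou(m_i, m_j)
                for j, m_j in enumerate(marking_masks) if j != i]
        if sims and all(s < sim_threshold for s in sims):
            return i
    return None
```

Two images sharing the same lane line yield an IoU near 1, while a close-up whose markings land elsewhere in the frame scores near 0 and falls below the threshold.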
And S403, determining the target segmentation image as a close-up image.
In the above image processing method, the road markings in each segmented image are identified, the pairwise similarities between the identified road markings are obtained, whether a target segmented image whose road markings have a similarity to those of the other segmented images smaller than the preset similarity threshold exists is determined according to the similarities, and the target segmented image is determined as the close-up image. In other words, the image captured at a focal length different from the others is taken as the close-up image. This improves the accuracy of the determined close-up image, and in turn the accuracy of the secondary check, based on the segmented images other than the close-up image, of whether the target vehicle drove illegally.
In the process of detecting whether each segmented image is a close-up image based on the image feature information, any one of the above embodiments may be used alone, any of them may be combined, or all of them may be used together to determine whether a segmented image is a close-up image, which is not limited in the embodiments of the present application.
On the basis of the above embodiment, the terminal may obtain a plurality of segmented images by inputting the stitched image into a preset segmentation model, which is described in detail below with reference to fig. 6.
Fig. 6 is a schematic flow chart of an image processing method in another embodiment, which relates to a specific process of how a terminal performs segmentation processing on a spliced image to obtain a plurality of segmented images, and as shown in fig. 6, one possible implementation method of the above-mentioned S102 "performing segmentation processing on a spliced image to obtain a plurality of segmented images" includes the following steps:
S501, segmenting the stitched image according to a preset segmentation mode to obtain a plurality of pre-segmented images; the preset segmentation mode is used for indicating the size corresponding to each pre-segmented image.
The preset segmentation mode may indicate that the stitched image is divided into several equal parts, and indicates the size corresponding to each pre-segmented image. It should be noted that the preset segmentation mode may include a plurality of different pre-segmented image sizes. For example, if the size of the stitched image is 240 × 160, the preset segmentation mode may include three kinds of pre-segmented image sizes: bisecting the stitched image in the horizontal direction to obtain two 120 × 160 pre-segmented images; trisecting the stitched image in the horizontal direction to obtain three 80 × 160 pre-segmented images; and bisecting the stitched image in both the horizontal and vertical directions to obtain four 120 × 80 pre-segmented images, for a total of 9 pre-segmented images.
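The preset segmentation modes of the 240 × 160 example can be expressed as (rows, cols) grids. A small sketch follows; note that trisecting the 240-pixel-wide image yields pieces 80 pixels wide:

```python
import numpy as np

def pre_segment(image, grid_modes):
    # Split a stitched image into equal parts for each (rows, cols)
    # grid mode; every mode contributes rows * cols pre-segmented images.
    height, width = image.shape[:2]
    pieces = []
    for rows, cols in grid_modes:
        piece_h, piece_w = height // rows, width // cols
        for r in range(rows):
            for c in range(cols):
                pieces.append(image[r * piece_h:(r + 1) * piece_h,
                                    c * piece_w:(c + 1) * piece_w])
    return pieces

# 240 wide x 160 high: halves, thirds, and a 2x2 grid give 2 + 3 + 4 = 9 pieces.
stitched = np.zeros((160, 240), dtype=np.uint8)
pieces = pre_segment(stitched, [(1, 2), (1, 3), (2, 2)])
```

The grid-mode representation and function name are illustrative assumptions; the patent only fixes the output sizes, not how the modes are encoded.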
S502, inputting the pre-segmentation images into a preset boundary recognition model to obtain a plurality of segmentation images.
The preset boundary recognition model may be a neural network model configured to determine the segmented images among the plurality of pre-segmented images according to the degree to which the boundary of each pre-segmented image matches a boundary in the actual road image. For example, the preset boundary recognition model may be an SKNet model: the pre-segmented images are input into the SKNet model to obtain the boundary matching degree of each pre-segmented image, and the segmented images among the pre-segmented images are determined accordingly.
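The selection step can be sketched as a filter over the pre-segmented images. Here `boundary_model` stands in for the SKNet-style network; its callable interface, score range, and the 0.5 threshold are assumptions for illustration, not details fixed by the patent:

```python
def select_segmented_images(pre_segments, boundary_model, match_threshold=0.5):
    # Keep only the pre-segmented images whose boundaries the model
    # judges to match boundaries in the actual road image.
    return [piece for piece in pre_segments
            if boundary_model(piece) > match_threshold]
```

In a unit test the model can be stubbed with any callable returning a score, which decouples this selection logic from the network itself.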
In the above image processing method, the stitched image is segmented according to the preset segmentation mode to obtain a plurality of pre-segmented images, where the preset segmentation mode indicates the size corresponding to each pre-segmented image, and the pre-segmented images are input into the preset boundary recognition model to obtain the segmented images. Since the segmented images are obtained through the preset boundary recognition model, their accuracy is improved, which in turn improves the accuracy of determining, according to the segmented images other than the close-up image, whether the target vehicle drove illegally.
Fig. 7 is a schematic flowchart of an image processing method in another embodiment, as shown in fig. 7, the method includes the following steps:
S601, obtaining a stitched image of a target vehicle suspected of driving illegally, wherein the stitched image is obtained by stitching a plurality of captured road images of the target vehicle.
And S602, carrying out segmentation processing on the spliced image to obtain a plurality of segmented images.
S603, detecting whether the size of the target vehicle in the segmented image is larger than a preset size threshold value, if so, determining the segmented image as a close-up image, and executing S607; if not, go to S604.
S604, detecting whether the number of the vehicles in the segmented image is smaller than a preset number threshold, if so, determining the segmented image as a close-up image, and executing S607; if not, go to S605.
S605, acquiring the similarity of the road marks recognized from the segmented images, determining whether a target segmented image with the similarity smaller than a preset similarity threshold exists, if so, taking the target segmented image as a close-up image, and executing S607; if not, go to S606.
S606, comparing the background images extracted from the segmented images, determining whether an image with a background image difference value larger than a preset difference value threshold exists, if so, determining the image as a close-up image, and executing S607, otherwise, executing S608.
And S607, deleting the close-up image from the plurality of segmented images, and determining whether the target vehicle illegally runs by using the rest segmented images.
And S608, determining whether the target vehicle illegally runs by using the segmented image.
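Steps S603 to S606 form a cascade of close-up tests over each segmented image. The sketch below assumes the per-image cues (target-vehicle size, vehicle count, road-marking similarity to the other images, background difference) have already been computed by the embodiments above; the dict keys and threshold names are illustrative only:

```python
def is_close_up(cues, thresholds):
    # Cascade mirroring S603-S606: any one test firing marks the
    # segmented image as a close-up and short-circuits the rest.
    if cues["vehicle_size"] > thresholds["size"]:              # S603
        return True
    if cues["vehicle_count"] < thresholds["count"]:            # S604
        return True
    if cues["marking_similarity"] < thresholds["similarity"]:  # S605
        return True
    if cues["background_diff"] > thresholds["diff"]:           # S606
        return True
    return False
```

Images flagged as close-ups are then removed and the violation check runs on the remainder (S607); if no test fires for any image, all segmented images are used (S608).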
The present principles and technical effects of the image processing method are similar to those of the method embodiments, and are not described herein again.
It should be understood that although the steps in the flowcharts of fig. 2-7 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, there is no strict restriction on the order in which these steps are performed, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2-7 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and these sub-steps or stages are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Fig. 8 is a schematic structural diagram of an image processing apparatus provided in an embodiment, and as shown in fig. 8, the image processing apparatus includes: an acquisition module 10, a segmentation module 20 and a determination module 30, wherein:
the acquisition module 10 is configured to acquire a stitched image of a target vehicle suspected of illegal driving, where the stitched image is obtained by stitching a plurality of captured road images of the target vehicle;
a segmentation module 20, configured to perform segmentation processing on the spliced image to obtain a plurality of segmented images;
and the determining module 30 is used for detecting whether each segmented image is a close-up image, deleting the close-up image from the plurality of segmented images, and determining whether the target vehicle illegally runs by using the rest segmented images.
In one embodiment, determination module 30 is specifically configured to detect whether each segmented image is a close-up image based on image feature information of each segmented image; wherein the image characteristic information includes at least one of a size of the target vehicle, a number of vehicles in the segmented image, a background image of the segmented image, and a road sign in the segmented image.
In one embodiment, the determining module 30 is specifically configured to detect whether the size of the target vehicle in each of the segmented images is greater than a preset size threshold, and determine that the segmented image is a close-up image if the size of the target vehicle in the segmented image is greater than the preset size threshold.
In one embodiment, the determining module 30 is specifically configured to detect whether the number of vehicles in each segmented image is less than a preset number threshold, and determine that the segmented image is a close-up image if the number of vehicles in the segmented image is less than the preset number threshold.
In one embodiment, the determining module 30 is specifically configured to respectively extract corresponding background images from the segmented images; comparing background images extracted from the segmented images, and determining whether a target background image exists in each background image according to a comparison result, wherein the difference between the target background image and other background images is greater than a preset difference threshold value; and determining the segmentation image corresponding to the target background image as a close-up image.
In one embodiment, the determining module 30 is specifically configured to obtain the position information of the target vehicle in each segmented image, and obtain a position information set of the target vehicle on each segmented image; and setting the pixels of the area corresponding to the position information set on each segmented image as the same preset value to obtain the background image corresponding to each segmented image.
In one embodiment, the determination module 30 is specifically configured to identify a road sign in each segmented image; acquiring the similarity of road marks recognized from each segmented image, and determining whether a target segmented image exists in each segmented image according to the similarity, wherein the similarity between the target segmented image and other segmented images is smaller than a preset similarity threshold; the target segmented image is determined to be a close-up image.
In an embodiment, the segmentation module 20 is specifically configured to segment the stitched image according to a preset segmentation mode to obtain a plurality of pre-segmented images; the preset segmentation mode is used for indicating the corresponding size of each pre-segmentation image; and inputting the pre-segmentation images into a preset boundary identification model to obtain a plurality of segmentation images.
The image processing apparatus provided in the embodiment of the present application may implement the method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
For the specific definition of an image processing apparatus, reference may be made to the above definition of the image processing method, which is not described herein again. The respective modules in the image processing apparatus described above may be wholly or partially implemented by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal device, and its internal structure diagram may be as shown in fig. 9. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement an image processing method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 9 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as a particular computing device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, there is provided a terminal device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring a spliced image of a target vehicle suspected to run illegally, wherein the spliced image is obtained by splicing a plurality of captured road images of the target vehicle;
performing segmentation processing on the spliced image to obtain a plurality of segmented images;
and detecting whether each segmented image is a close-up image or not, deleting the close-up image from the plurality of segmented images, and determining whether the target vehicle illegally runs or not by using the rest segmented images.
In one embodiment, the processor, when executing the computer program, further performs the steps of: detecting whether each segmented image is a close-up image according to the image characteristic information of each segmented image; wherein the image characteristic information includes at least one of a size of the target vehicle, a number of vehicles in the segmented image, a background image of the segmented image, and a road sign in the segmented image.
In one embodiment, the processor, when executing the computer program, further performs the steps of: and for each segmented image, detecting whether the size of the target vehicle in the segmented image is larger than a preset size threshold value, and if the size of the target vehicle in the segmented image is larger than the preset size threshold value, determining that the segmented image is a close-up image.
In one embodiment, the processor, when executing the computer program, further performs the steps of: for each segmented image, detecting whether the number of vehicles in the segmented image is less than a preset number threshold, and if the number of vehicles in the segmented image is less than the preset number threshold, determining that the segmented image is a close-up image.
In one embodiment, the processor, when executing the computer program, further performs the steps of: extracting corresponding background images from the segmentation images respectively; comparing background images extracted from the segmented images, and determining whether a target background image exists in each background image according to a comparison result, wherein the difference between the target background image and other background images is greater than a preset difference threshold value; and determining the segmentation image corresponding to the target background image as a close-up image.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring the position information of the target vehicle in each segmented image to obtain a position information set of the target vehicle on each segmented image; and setting the pixels of the area corresponding to the position information set on each segmented image as the same preset value to obtain the background image corresponding to each segmented image.
In one embodiment, the processor, when executing the computer program, further performs the steps of: identifying a road mark in each segmented image; acquiring the similarity of road marks recognized from each segmented image, and determining whether a target segmented image exists in each segmented image according to the similarity, wherein the similarity between the target segmented image and other segmented images is smaller than a preset similarity threshold; the target segmented image is determined to be a close-up image.
In one embodiment, the processor, when executing the computer program, further performs the steps of: segmenting the spliced image according to a preset segmentation mode to obtain a plurality of pre-segmented images; the preset segmentation mode is used for indicating the corresponding size of each pre-segmentation image; and inputting the pre-segmentation images into a preset boundary identification model to obtain a plurality of segmentation images.
The implementation principle and technical effect of the terminal device provided in this embodiment are similar to those of the method embodiments described above, and are not described herein again.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a spliced image of a target vehicle suspected to run illegally, wherein the spliced image is obtained by splicing a plurality of captured road images of the target vehicle;
performing segmentation processing on the spliced image to obtain a plurality of segmented images;
and detecting whether each segmented image is a close-up image or not, deleting the close-up image from the plurality of segmented images, and determining whether the target vehicle illegally runs or not by using the rest segmented images.
In one embodiment, the computer program when executed by the processor implements the steps of: detecting whether each segmented image is a close-up image according to the image characteristic information of each segmented image; wherein the image characteristic information includes at least one of a size of the target vehicle, a number of vehicles in the segmented image, a background image of the segmented image, and a road sign in the segmented image.
In one embodiment, the computer program when executed by the processor implements the steps of: and for each segmented image, detecting whether the size of the target vehicle in the segmented image is larger than a preset size threshold value, and if the size of the target vehicle in the segmented image is larger than the preset size threshold value, determining that the segmented image is a close-up image.
In one embodiment, the computer program when executed by the processor implements the steps of: for each segmented image, detecting whether the number of vehicles in the segmented image is less than a preset number threshold, and if the number of vehicles in the segmented image is less than the preset number threshold, determining that the segmented image is a close-up image.
In one embodiment, the computer program when executed by the processor implements the steps of: extracting corresponding background images from the segmentation images respectively; comparing background images extracted from the segmented images, and determining whether a target background image exists in each background image according to a comparison result, wherein the difference between the target background image and other background images is greater than a preset difference threshold value; and determining the segmentation image corresponding to the target background image as a close-up image.
In one embodiment, the computer program when executed by the processor implements the steps of: acquiring the position information of the target vehicle in each segmented image to obtain a position information set of the target vehicle on each segmented image; and setting the pixels of the area corresponding to the position information set on each segmented image as the same preset value to obtain the background image corresponding to each segmented image.
In one embodiment, the computer program when executed by the processor implements the steps of: identifying a road mark in each segmented image; acquiring the similarity of road marks recognized from each segmented image, and determining whether a target segmented image exists in each segmented image according to the similarity, wherein the similarity between the target segmented image and other segmented images is smaller than a preset similarity threshold; the target segmented image is determined to be a close-up image.
In one embodiment, the computer program when executed by the processor implements the steps of: segmenting the spliced image according to a preset segmentation mode to obtain a plurality of pre-segmented images; the preset segmentation mode is used for indicating the corresponding size of each pre-segmentation image; and inputting the pre-segmentation images into a preset boundary identification model to obtain a plurality of segmentation images.
The implementation principle and technical effect of the computer-readable storage medium provided by this embodiment are similar to those of the above-described method embodiment, and are not described herein again.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above can be implemented by a computer program instructing related hardware. The computer program can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and improvements can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (11)

1. An image processing method, characterized in that the method comprises:
acquiring a spliced image of a target vehicle suspected to run illegally, wherein the spliced image is obtained by splicing a plurality of captured road images of the target vehicle;
performing segmentation processing on the spliced image to obtain a plurality of segmented images;
and detecting whether each segmented image is a close-up image or not, deleting the close-up image from the plurality of segmented images, and determining whether the target vehicle illegally runs by using the rest of the segmented images.
2. The method of claim 1, wherein said detecting whether each of said segmented images is a close-up image comprises:
detecting whether each segmented image is a close-up image according to the image characteristic information of each segmented image; wherein the image feature information includes at least one of a size of the target vehicle, a number of vehicles in a segmented image, a background image of the segmented image, and a road sign in the segmented image.
3. The method of claim 2, wherein the detecting whether each of the segmented images is a close-up image according to the image feature information of each of the segmented images comprises:
for each segmented image, detecting whether the size of the target vehicle in the segmented image is larger than a preset size threshold value, and if the size of the target vehicle in the segmented image is larger than the preset size threshold value, determining that the segmented image is a close-up image.
4. The method of claim 2, wherein the detecting whether each of the segmented images is a close-up image according to the image feature information of each of the segmented images comprises:
for each segmented image, detecting whether the number of vehicles in the segmented image is less than a preset number threshold, and if the number of vehicles in the segmented image is less than the preset number threshold, determining that the segmented image is a close-up image.
5. The method of claim 2, wherein the detecting whether each of the segmented images is a close-up image according to the image feature information of each of the segmented images comprises:
extracting a corresponding background image from each of the segmented images;
comparing the background images extracted from the segmented images, and determining, according to the comparison result, whether a target background image exists among the background images, wherein the difference between the target background image and each of the other background images is greater than a preset difference threshold;
and determining the segmented image corresponding to the target background image as a close-up image.
6. The method of claim 5, wherein the extracting a background image from each of the segmented images comprises:
acquiring position information of the target vehicle in each segmented image to obtain a set of position information of the target vehicle on the segmented images;
and setting the pixels of the regions corresponding to the position information set on each segmented image to the same preset value to obtain the background image corresponding to each segmented image.
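The background extraction of claim 6 and the outlier comparison of claim 5 can be sketched together. The mask value, the mean-absolute-difference metric, and the threshold below are assumptions chosen for illustration; the claims only require masking the vehicle regions to a common preset value and flagging the background that differs from all the others by more than a threshold:

```python
import numpy as np

MASK_VALUE = 0          # preset pixel value for vehicle regions (assumption)
DIFF_THRESHOLD = 20.0   # preset difference threshold (assumption)

def extract_background(image, vehicle_boxes, mask_value=MASK_VALUE):
    """Claim 6: blank out every region where the target vehicle appears,
    leaving only the (roughly static) background."""
    bg = image.copy()
    for x1, y1, x2, y2 in vehicle_boxes:
        bg[y1:y2, x1:x2] = mask_value
    return bg

def find_outlier_background(backgrounds, diff_threshold=DIFF_THRESHOLD):
    """Claim 5: return the index of a background whose mean absolute
    difference to *every* other background exceeds the threshold, or None."""
    n = len(backgrounds)
    for i in range(n):
        diffs = [np.abs(backgrounds[i].astype(float)
                        - backgrounds[j].astype(float)).mean()
                 for j in range(n) if j != i]
        if diffs and min(diffs) > diff_threshold:
            return i
    return None
```

Masking the vehicle in every frame with the same value keeps the comparison from being dominated by the vehicle itself, so only genuine scene changes (a zoomed-in close-up) stand out.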
7. The method of claim 2, wherein the detecting whether each of the segmented images is a close-up image according to the image feature information of each of the segmented images comprises:
identifying a road sign in each segmented image;
acquiring the similarity between every two road signs identified from the segmented images, and determining, according to the similarities, whether a target segmented image exists among the segmented images, wherein the similarity between the road sign in the target segmented image and the road sign in each other segmented image is smaller than a preset similarity threshold;
and determining the target segmented image as a close-up image.
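The road-sign comparison of claim 7 can be sketched with a toy histogram-intersection similarity standing in for whatever matcher an implementation would actually use; the function names and the threshold value are illustrative assumptions:

```python
import numpy as np

SIM_THRESHOLD = 0.8  # preset similarity threshold (assumption)

def sign_similarity(sign_a, sign_b):
    """Toy similarity: normalized histogram intersection of two road-sign
    crops. A production system would use a learned embedding instead."""
    ha, _ = np.histogram(sign_a, bins=16, range=(0, 256), density=True)
    hb, _ = np.histogram(sign_b, bins=16, range=(0, 256), density=True)
    return float(np.minimum(ha, hb).sum() / np.maximum(ha, hb).sum())

def find_target_segment(signs, threshold=SIM_THRESHOLD):
    """Claim 7: flag the segmented image whose road sign is dissimilar
    (below the threshold) to the signs in *all* other segmented images."""
    n = len(signs)
    for i in range(n):
        sims = [sign_similarity(signs[i], signs[j]) for j in range(n) if j != i]
        if sims and max(sims) < threshold:
            return i
    return None
```

As with the background check, the outlier must differ from every other frame, so a single anomalous road sign marks its frame as the close-up.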
8. The method according to any one of claims 1 to 6, wherein the segmenting the stitched image into a plurality of segmented images comprises:
segmenting the stitched image according to a preset segmentation mode to obtain a plurality of pre-segmented images, wherein the preset segmentation mode indicates the size of each pre-segmented image;
and inputting the pre-segmented images into a preset boundary recognition model to obtain the segmented images.
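The two-stage segmentation of claim 8 can be sketched as a fixed grid cut followed by a boundary-refinement pass; the horizontal three-frame layout is an assumed example, and an identity placeholder stands in for the preset boundary recognition model:

```python
import numpy as np

def pre_segment(stitched, grid=(1, 3)):
    """Claim 8, step 1: cut the stitched image into equally sized
    pre-segmented images according to a preset layout (assumed here to be
    a horizontal strip of three captured frames)."""
    rows, cols = grid
    h, w = stitched.shape[:2]
    tiles = []
    for r in range(rows):
        for c in range(cols):
            tiles.append(stitched[r * h // rows:(r + 1) * h // rows,
                                  c * w // cols:(c + 1) * w // cols])
    return tiles

def refine_boundaries(tiles, model=None):
    """Claim 8, step 2: a boundary-recognition model would adjust each
    tile's borders; here an identity placeholder stands in for the model."""
    if model is None:
        return tiles
    return [model(t) for t in tiles]
```

The coarse grid exploits the fact that evidence-chain images are stitched in a known layout, while the model corrects for frames whose true boundaries drift from the grid.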
9. An image processing apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to acquire a stitched image of a target vehicle suspected of illegal driving, wherein the stitched image is obtained by stitching a plurality of captured road images of the target vehicle;
a segmentation module, configured to perform segmentation processing on the stitched image to obtain a plurality of segmented images;
and a determining module, configured to detect whether each segmented image is a close-up image, delete the close-up images from the plurality of segmented images, and determine, using the remaining segmented images, whether the target vehicle has driven illegally.
10. A terminal device, comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 8.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
CN202010098364.3A 2020-02-18 2020-02-18 Image processing method, device, equipment and storage medium Pending CN111340837A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010098364.3A CN111340837A (en) 2020-02-18 2020-02-18 Image processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111340837A true CN111340837A (en) 2020-06-26

Family

ID=71181673

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010098364.3A Pending CN111340837A (en) 2020-02-18 2020-02-18 Image processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111340837A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140363089A1 (en) * 2013-06-10 2014-12-11 Nk Works Co., Ltd. Information processing apparatus for watching, information processing method and non-transitory recording medium recording program
CN107241645A (en) * 2017-06-09 2017-10-10 成都索贝数码科技股份有限公司 A kind of method that splendid moment of scoring is automatically extracted by the subtitle recognition to video
CN109325405A (en) * 2018-08-10 2019-02-12 北京奇艺世纪科技有限公司 A kind of mask method of lens type, device and equipment
AU2017265110A1 (en) * 2017-11-23 2019-06-06 Canon Kabushiki Kaisha Method for segmenting video using background model learned with pixelwise adaptive learning rate
CN110766077A (en) * 2019-10-24 2020-02-07 上海眼控科技股份有限公司 Method, device and equipment for screening sketch in evidence chain image

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220215643A1 (en) * 2021-01-07 2022-07-07 Canon Kabushiki Kaisha Image processing device, imaging device, image processing method, and recording medium
US11854239B2 (en) * 2021-01-07 2023-12-26 Canon Kabushiki Kaisha Image processing device, imaging device, image processing method, and recording medium
CN114782459A (en) * 2022-06-21 2022-07-22 山东极视角科技有限公司 Spliced image segmentation method, device and equipment based on semantic segmentation
CN114782459B (en) * 2022-06-21 2022-08-30 山东极视角科技有限公司 Spliced image segmentation method, device and equipment based on semantic segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination