CN113949830B - Image processing method - Google Patents

Image processing method

Info

Publication number
CN113949830B
CN113949830B
Authority
CN
China
Prior art keywords
image
information
determining
gray
target
Prior art date
Legal status
Active
Application number
CN202111162368.4A
Other languages
Chinese (zh)
Other versions
CN113949830A (en)
Inventor
郭义明
吴应龙
郁启华
占磊
胡江海
黄光球
唐鑫鑫
邵书成
彭冬
李朝锋
Current Assignee
Guangxi Guoneng Energy Development Co ltd
Guoneng Zhishen Control Technology Co ltd
State Energy Group Guangxi Electric Power Co ltd
Original Assignee
Guangxi Guoneng Energy Development Co ltd
Guoneng Zhishen Control Technology Co ltd
State Energy Group Guangxi Electric Power Co ltd
Priority date
Filing date
Publication date
Application filed by Guangxi Guoneng Energy Development Co ltd, Guoneng Zhishen Control Technology Co ltd, State Energy Group Guangxi Electric Power Co ltd filed Critical Guangxi Guoneng Energy Development Co ltd
Priority to CN202111162368.4A
Publication of CN113949830A
Application granted
Publication of CN113949830B


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60 Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/68 Noise processing, e.g. detecting, correcting, reducing or removing noise applied to defects
    • G06T5/77
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images

Abstract

The embodiment of the application discloses a method for processing image information. The method comprises the following steps: acquiring a gray value θ_i of the i-th frame image, where i is a positive integer; determining gray-scale change information between the i-th frame image and the images preceding the i-th frame; and, if the gray-scale change information does not meet a preset gray-scale change condition, determining to perform a target detection operation on the i-th frame image, where the gray-scale change condition is determined according to the light conditions corresponding to the acquisition time of the i-th frame image.

Description

Image processing method
Technical Field
The embodiment of the application relates to the field of information processing, in particular to an image processing method.
Background
A substation contains many high-voltage devices and is therefore equipped with numerous cameras for safety reasons. These cameras continuously monitor designated areas. Because a power station covers a large area, the recorded footage is often reviewed only after an accident has occurred, so accidents are discovered with a delay. Moreover, cameras that capture abnormal frames based on changes of light in the collected images have poor accuracy.
Disclosure of Invention
In order to solve any of the above technical problems, an embodiment of the present application provides an image processing method.
In order to achieve the object of the embodiment of the present application, the embodiment of the present application provides a method for processing image information, including:
acquiring a gray value θ_i of the i-th frame image, where i is a positive integer;
determining gray-scale change information between the i-th frame image and the images preceding the i-th frame;
and, if the gray-scale change information does not meet a preset gray-scale change condition, determining to perform a target detection operation on the i-th frame image, where the gray-scale change condition is determined according to the light conditions corresponding to the acquisition time of the i-th frame image.
A storage medium having a computer program stored therein, wherein the computer program is arranged to perform the method described above when run.
An electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the method described above.
One of the above technical solutions has the following advantages or beneficial effects:
By obtaining the gray value θ_i of the i-th frame image, determining gray-scale change information between the i-th frame image and the images preceding it, and, if the gray-scale change information does not meet a preset gray-scale change condition, determining to perform a target detection operation on the i-th frame image, false detections caused by light interference are reduced.
Additional features and advantages of embodiments of the application will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of embodiments of the application. The objectives and other advantages of embodiments of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the technical solution of the embodiments of the present application and are incorporated in and constitute a part of this specification; they illustrate and explain the technical solution of the embodiments and are not intended to limit it.
FIG. 1 is a flowchart of an image processing method according to an embodiment of the present application;
FIG. 2 is another flowchart of an image processing method according to an embodiment of the present application;
FIG. 3 is a flow chart of a method for determining whether an image is changed in the method shown in FIG. 2;
FIG. 4 is a flow chart of a target tracking method in the method of FIG. 2.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the embodiments of the present application will be described in detail hereinafter with reference to the accompanying drawings. It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be arbitrarily combined with each other.
Fig. 1 is a flowchart of a method for processing image information according to an embodiment of the present application. As shown in Fig. 1, the method includes:
Step 101: obtain the gray value θ_i of the i-th frame image, where i is a positive integer.
In an exemplary embodiment, the gray value distribution of the current frame picture is acquired as follows: traverse the pixels of the current frame's grayscale image; accumulate the gray value of each pixel into a sum; count the total number n of pixels in the image; and obtain the average gray value sum/n of the image.
Step 102: determine gray-scale change information between the i-th frame image and the images preceding the i-th frame.
The gray-scale change information may be determined by comparing the gray value of the i-th frame with the gray values of one or more frames preceding it.
In an exemplary embodiment, the images before the i-th frame are the N consecutive frames preceding the i-th frame, N being a positive integer. Since these N consecutive frames are the image frames nearest to the i-th frame, with gray values most similar to it, comparing the gray value of the i-th frame with the gray values of those N frames effectively eliminates the interference of external light and yields accurate gray-scale change information.
In another exemplary embodiment, the images before the i-th frame are N non-consecutive frames preceding the i-th frame, all of which fall within a preset acquisition period. For example, in the m-th acquisition period, image frames acquired during that period may be selected as the reference to represent the external light conditions during the period.
Step 103: if the gray-scale change information does not meet a preset gray-scale change condition, determine to perform a target detection operation on the i-th frame image, where the gray-scale change condition is determined according to the light conditions corresponding to the acquisition time of the i-th frame image.
Because each image acquisition device is installed at a different position, the light conditions also differ. The light conditions corresponding to different acquisition times can be determined from previously acquired image information, and the corresponding gray-value change condition derived from them, so that the condition matches the environment in which the device is installed.
For example, the gray values of images acquired by the device at the acquisition time of the i-th frame over a period of time (e.g., one week or one month) may be collected, the maximum and minimum gray values determined, and the gray-scale change condition derived from them.
For example, the gray-scale change condition may be a threshold determined from the difference between the maximum value and the minimum value.
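A hedged sketch of deriving such a threshold; the patent specifies only that the condition comes from the maximum/minimum gray values observed over a period, so the margin factor here is an illustrative assumption:

```python
def gray_change_threshold(historical_grays, margin: float = 1.0) -> float:
    """historical_grays: average gray values of frames captured at the same
    acquisition time of day over e.g. one week or one month."""
    g_max = max(historical_grays)
    g_min = min(historical_grays)
    # Threshold derived from the max-min difference; margin is an assumed tuning factor.
    return margin * (g_max - g_min)
```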
The gray-scale change condition is determined based on the gray-value variation caused by external light. If the gray-scale change information does not satisfy the condition, the image acquisition is being disturbed by the presence of a person or object, and the target detection function needs to be executed; otherwise the change is due to normal light variation, and the target detection function need not be executed.
Because the gray-scale change condition is set, gray-value changes caused by light variation do not trigger the target detection operation, effectively reducing false detections caused by light interference.
In summary, the method provided by the embodiment of the application obtains the gray value θ_i of the i-th frame image, determines gray-scale change information between the i-th frame image and the images preceding it, and, if the gray-scale change information does not satisfy a preset gray-scale change condition, determines to perform a target detection operation on the i-th frame image, thereby reducing false detections caused by light interference.
The following describes the method provided by the embodiment of the application:
In an exemplary embodiment, determining the gray-scale change information between the i-th frame image and the images preceding the i-th frame includes:
acquiring the gray values of the images of at least two frames preceding the i-th frame;
determining the average value V_i of the gray values of the images of the at least two frames;
calculating the difference between the gray value θ_i and the average value V_i to obtain the gray-scale change information.
The gray value of each frame before the i-th frame can be obtained in the manner described in step 101, and the average is taken over the gray values of the at least two frames, which is simple and convenient to implement.
In an exemplary embodiment, when the at least two frames are the N consecutive frames preceding the i-th frame, the average value V_i is obtained by the following expression:
V_i = βV_{i-1} + (1 − β)θ_i
where 0 < β < 1 and V_{i-1} denotes the average gray value corresponding to the N consecutive frames preceding the (i−1)-th frame.
Here β is a weight whose value can be set according to actual needs.
In this way, the gray-value information of the N consecutive frames preceding the i-th frame is represented more accurately, improving the accuracy of the subsequent judgment.
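A minimal sketch of this running average and of the change test built on it; β = 0.9 follows the value used in the application example below, and the function names are illustrative:

```python
def update_average(v_prev: float, theta_i: float, beta: float = 0.9) -> float:
    """V_i = beta * V_{i-1} + (1 - beta) * theta_i, with 0 < beta < 1."""
    return beta * v_prev + (1.0 - beta) * theta_i

def gray_change(theta_i: float, v_i: float) -> float:
    """Gray-scale change information: difference between the current gray value and the average."""
    return abs(theta_i - v_i)
```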
In the process of implementing the application, it was found that cameras in the prior art have a face detection function but are limited in detecting small animals and people seen from the back or side, and are either fixed or must be adjusted manually. In view of this finding, the method provided by the embodiment of the application can effectively shield abnormal-capture interference caused by light changes at the lens, and at the same time automatically track a detected target and adjust the camera angle toward its position.
In an exemplary embodiment, after determining whether to perform the object detection operation on the i-th frame image according to the change information, the method further includes:
determining the position information of a target in an ith frame image;
and adjusting the acquisition angle of the image according to the position information.
Based on the position information of the target in the i-th frame image, the acquisition angle of the image acquisition device is adjusted, achieving automatic tracking.
In an exemplary embodiment, the adjusting the acquisition angle of the image according to the position information includes:
judging whether the position information meets the boundary condition of an ith frame image or not;
if the position information meets the boundary condition of the ith frame of image, determining a target boundary corresponding to the position information;
and adjusting the acquisition angle of the image according to the target boundary.
The boundary condition may be expressed by coordinate information of the image. If the position information falls within the coordinate range corresponding to the boundary condition, the target meets the boundary condition of the i-th frame image, indicating that, as time passes, the target may leave the acquisition range of the image acquisition device. The direction in which the target is moving away therefore needs to be determined, i.e., the target boundary; the acquisition angle is then adjusted according to the target boundary, ensuring that the target remains within the acquisition range.
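A sketch of the boundary test under stated assumptions: targets are (x1, y1, x2, y2) boxes in an image of width w and height h, and the pixel margin defining "close to the boundary" is an illustrative choice:

```python
def touched_boundaries(box, w, h, margin=10):
    """Return the image borders (if any) that the target box is close to."""
    x1, y1, x2, y2 = box
    borders = []
    if x1 <= margin:
        borders.append("left")
    if y1 <= margin:
        borders.append("top")
    if x2 >= w - margin:
        borders.append("right")
    if y2 >= h - margin:
        borders.append("bottom")
    return borders
```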
In an exemplary embodiment, the determining the target boundary corresponding to the location information includes:
if the position information of at least two targets meets the boundary condition of the i-th frame image, acquiring the boundary information corresponding to each target;
and determining the object boundary according to the number of the objects on the same boundary.
If multiple targets are detected at the boundary of the image acquisition range, the direction in which each target is moving away is determined, i.e., the boundary of each target; the boundary holding the largest number of targets can then be selected as the target boundary.
In an exemplary embodiment, an importance order of the targets may instead be determined, and the boundary corresponding to the most important target taken as the target boundary, as in the sketch below. The importance order may be set on external request, or determined according to a locally preset rule. One such rule orders target types from high to low importance as "driven equipment (e.g., unmanned aerial vehicle, automobile), person, animal".
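A sketch of both selection rules, assuming each bordering target is reported as a (type, boundary) pair; the class names mirror the preset rule quoted above and are illustrative:

```python
from collections import Counter

# Preset importance rule, high to low: driven equipment, person, animal.
IMPORTANCE = {"driven_equipment": 3, "person": 2, "animal": 1}

def boundary_by_count(bordering):
    """Pick the boundary holding the largest number of targets."""
    counts = Counter(boundary for _, boundary in bordering)
    return counts.most_common(1)[0][0]

def boundary_by_importance(bordering):
    """Pick the boundary of the most important target."""
    _, boundary = max(bordering, key=lambda t: IMPORTANCE.get(t[0], 0))
    return boundary
```

For example, boundary_by_count([("person", "left"), ("animal", "left"), ("person", "right")]) returns "left".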
In an exemplary embodiment, after the position information of the target in the i-th frame image is determined, the method includes:
determining the size information of the position information in the i-th frame image;
and performing target detection on newly acquired images according to the size information.
By determining the size information corresponding to the position information, the size of the target can be determined and the target identified according to its size, which improves subsequent tracking efficiency and reduces the image recognition workload.
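One hedged reading of this step is to use the recorded size to filter candidate detections in new frames; the tolerance is an assumed parameter:

```python
def filter_by_size(detections, ref_w, ref_h, tol=0.5):
    """Keep detections whose box size is within +/- tol of the recorded target size."""
    kept = []
    for x1, y1, x2, y2 in detections:
        w, h = x2 - x1, y2 - y1
        if abs(w - ref_w) <= tol * ref_w and abs(h - ref_h) <= tol * ref_h:
            kept.append((x1, y1, x2, y2))
    return kept
```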
The following describes the method provided by the embodiment of the present application using an application example:
The application example uses a statistical distribution to judge whether the gray values of the current frame and the previous 10 frames stay within a fixed threshold, replacing light-based abnormality detection; this filters out false alarms caused by light well. The target is tracked by matching the image obtained after target detection against the target in the frame image, dynamically comparing and tracking the coordinate position after the target is detected, which reduces the camera's target detection load. The camera is adjusted dynamically and adaptively during tracking, and is re-initialized to its originally set position once target tracking ends.
Fig. 2 is another flowchart of an image processing method according to an embodiment of the present application. As shown in fig. 2, the method includes:
s1, acquiring a terminal image. And obtaining the video stream from the terminal camera.
S2, whether the image is changed or not. The difference of each frame of image is analyzed when capturing the video flow graph of S1. And if the difference exceeds a given threshold, intercepting the image output of the current frame, otherwise, continuing to wait for the next frame.
S3, target detection. The detection model is designed according to the target to be detected in actual need, and models using a deep learning method, such as YOLO series and SSD series, are recommended.
S4, judging whether the target exists or not. And (3) judging the result detected in the step (S3), if no target is detected, returning to continue waiting for the corresponding camera, and if the target is detected, carrying out the step (S5) for further judgment.
S5, target tracking. Detecting that an object is detected for object detection and tracking the recorded object.
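A schematic of the S1 to S5 loop, assuming the helpers sketched elsewhere in this description; detect stands in for a YOLO/SSD-style model and all names are illustrative:

```python
import cv2

def monitor(stream_url, changed, detect, track):
    cap = cv2.VideoCapture(stream_url)              # S1: video stream from the terminal camera
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if not changed(gray):                       # S2: gray-change test; wait for the next frame
            continue
        targets = detect(frame)                     # S3: deep-learning detector (e.g. YOLO, SSD)
        if targets:                                 # S4: target present?
            track(frame, targets)                   # S5: track the recorded targets
    cap.release()
```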
Fig. 3 is a flowchart of a method for determining whether an image is changed in the method shown in fig. 2. As shown in fig. 3, the method includes:
s21, the gray value of the current frame picture. Traversing pixels of a frame gray image to be processed; the gray value of each pixel point is accumulated and summed; calculating the total number n of pixels in the image; the average gray value sum/n of the image is calculated.
S22: compare the current gray value with the gray value distribution of the previous 10 frames.
The value from S21 is compared with the previous 10 frames: a moving average V_t is computed from the average gray values of the previous 10 frames, and the moving average V_t is subtracted from the current gray value to obtain the change information. The moving average is an exponentially weighted moving average taken in time order:
V_t = βV_{t-1} + (1 − β)θ_t
where t = 1, 2, ..., 10, β is a weight (typically 0.9), and θ_t is the actual (observed) gray value at time t.
S23: judge whether the change information is larger than the threshold.
If not, the current gray value is saved and the oldest saved gray value is deleted.
If yes, S24 is executed.
S24: output the current frame picture. A sketch combining S21 to S24 follows.
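This sketch assumes the exponentially weighted average is folded over the saved window of the previous 10 frames; the warm-up behavior before the window fills is an illustrative choice:

```python
from collections import deque

class ChangeDetector:
    def __init__(self, threshold, beta=0.9):
        self.grays = deque(maxlen=10)   # average gray values of the previous 10 frames
        self.threshold = threshold
        self.beta = beta

    def is_changed(self, theta_t):
        """S22/S23: compare theta_t against the weighted average of the saved window."""
        if self.grays:
            v_t = self.grays[0]
            for g in list(self.grays)[1:]:
                v_t = self.beta * v_t + (1.0 - self.beta) * g  # V_t = beta*V_{t-1} + (1-beta)*theta_t
            if abs(theta_t - v_t) > self.threshold:
                return True             # S24: output the current frame picture
        self.grays.append(theta_t)      # save the current value; the deque drops the oldest
        return False
```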
FIG. 4 is a flow chart of a target tracking method in the method of FIG. 2. As shown in fig. 4, the method includes:
s51, clipping the current frame image. After the target is detected, the current frame image coordinates (x n1 ,y n1 ,x n2 ,y n2 )。
S52: judge whether the target is at a boundary. The coordinates (x_{n1}, y_{n1}, x_{n2}, y_{n2}) are compared with the range coordinates (0, 0, w, h) of the image: if the target is close to a boundary, yes is returned, otherwise no, together with the coordinates of the target. If multiple targets border in different directions, the coordinates of the direction containing the most targets can be offset and returned according to the weights of the targets (which targets are preferred is decided in advance).
S53: adjust the lens. According to the result returned by S52, the movement interface of the terminal camera is invoked and the angle adjusted accordingly so that the target remains within the lens.
S54: output the result. The detection result of S51 is returned.
S55: acquire the current frame image, i.e., the motion-detection image returned by step S4.
S56: store the detection result. The detection result of S51 is saved, and the target is tracked in combination with the current frame image acquired in S55.
S57: sliding comparison detection. Using the result of S56, a sliding window is run over the entire image from S55: windows the size of the S56 target image are sampled at a stride of 10 pixels across the S55 image, and the content similarity of each sampled window is matched against the stored target. Various matching methods can be used, such as a neural network or gray-average matching; the implementation here uses Hausdorff distance matching.
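A sketch of S57 under stated assumptions: the 10-pixel stride and Hausdorff matching are from the text, while the Canny edge extraction used to form the point sets and the symmetric-distance formulation are illustrative choices:

```python
import cv2
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def edge_points(gray):
    """Edge pixels of a grayscale image as a (K, 2) point set."""
    return np.column_stack(np.nonzero(cv2.Canny(gray, 50, 150)))

def sliding_hausdorff_match(frame_gray, target_gray, stride=10):
    """Slide a target-sized window over the frame at a 10-pixel stride and keep
    the window whose edge points are nearest the target's by Hausdorff distance."""
    th, tw = target_gray.shape
    tgt_pts = edge_points(target_gray)
    best_d, best_xy = np.inf, None
    for y in range(0, frame_gray.shape[0] - th + 1, stride):
        for x in range(0, frame_gray.shape[1] - tw + 1, stride):
            win_pts = edge_points(frame_gray[y:y + th, x:x + tw])
            if len(win_pts) == 0 or len(tgt_pts) == 0:
                continue
            d = max(directed_hausdorff(win_pts, tgt_pts)[0],
                    directed_hausdorff(tgt_pts, win_pts)[0])  # symmetric Hausdorff distance
            if d < best_d:
                best_d, best_xy = d, (x, y)
    return best_xy, best_d  # upper-left corner of the best window, or None if no match
```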
S58: judge whether a target is detected. Based on the result returned by S57, if no target is detected, S54 is performed; if a target is detected, the coordinates of its upper-left and lower-right corners are computed, and processing returns to S51 for cropping.
The method provided by this application example can solve the problem of false alarms caused by light interference in video monitoring at power stations, while combining deep learning and artificial-intelligence algorithms to detect video-monitoring targets. After a target is detected, the camera switches to frame-image matching to track targets appearing after a successful detection, reducing the number of calls to the detection model. Linking the camera-angle adjustment to the movement of the target's position within the lens reduces the manual adjustment needed to track a target.
An embodiment of the application provides a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the method as described in any of the preceding claims when run.
An embodiment of the application provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the method as described in any of the preceding claims.
Those of ordinary skill in the art will appreciate that all or some of the steps, systems, functional modules/units in the apparatus, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed cooperatively by several physical components. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as known to those skilled in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.

Claims (8)

1. A processing method of image information, comprising:
acquiring a gray value θ_i of the i-th frame image, wherein i is a positive integer;
determining gray scale variation information between the i-th frame image information and an image preceding the i-th frame;
if the gray level change information does not meet the preset gray level change condition, determining to execute target detection operation on the ith frame image, wherein the gray level change condition is determined according to the light irradiation condition corresponding to the ith frame image acquisition time;
wherein the gradation change condition is obtained by the following means including:
acquiring the gray values of images acquired at the acquisition time of the i-th frame within a period of time, and determining the maximum value and the minimum value of the gray values;
determining the gray scale variation condition according to the maximum value and the minimum value;
the determining gray scale variation information between the i-th frame image information and the image before the i-th frame includes:
acquiring gray values of images of at least two frames preceding the i-th frame;
determining an average value V_i of the gray values of the images of the at least two frames;
calculating the difference between the gray value θ_i and the average value V_i to obtain the gray-scale change information;
wherein, when the at least two frames are N consecutive frames preceding the i-th frame, the average value V_i is obtained by the following expression:
V_i = βV_{i-1} + (1 − β)θ_i
wherein 0 < β < 1, and V_{i-1} represents the average value of the gray values corresponding to the images of the N consecutive frames preceding the (i−1)-th frame.
2. The method of claim 1, wherein the image preceding the ith frame is an image of N consecutive frames preceding the ith frame, where N is a positive integer.
3. The method according to claim 1, wherein after the determining that the object detection operation is performed on the i-th frame image, the method further comprises:
determining the position information of a target in an ith frame image;
and adjusting the acquisition angle of the image according to the position information.
4. A method according to claim 3, wherein said adjusting the acquisition angle of the image according to the position information comprises:
judging whether the position information meets the boundary condition of an ith frame image or not;
if the position information meets the boundary condition of the ith frame of image, determining a target boundary corresponding to the position information;
and adjusting the acquisition angle of the image according to the target boundary.
5. The method of claim 4, wherein determining the target boundary to which the location information corresponds comprises:
if the position information of at least two targets meets the boundary condition of the ith frame of image, acquiring boundary information corresponding to each target;
and determining the object boundary according to the number of the objects on the same boundary.
6. A method according to claim 3, wherein said determining the position information of the object in the i-th frame image is followed by:
determining size information of the position information in an ith frame image;
and detecting the target of the acquired new image according to the size information.
7. A storage medium having a computer program stored therein, wherein the computer program is arranged to perform the method of any of claims 1 to 6 when run.
8. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to run the computer program to perform the method of any of the claims 1 to 6.
CN202111162368.4A 2021-09-30 2021-09-30 Image processing method Active CN113949830B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111162368.4A CN113949830B (en) 2021-09-30 2021-09-30 Image processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111162368.4A CN113949830B (en) 2021-09-30 2021-09-30 Image processing method

Publications (2)

Publication Number Publication Date
CN113949830A CN113949830A (en) 2022-01-18
CN113949830B 2023-11-24

Family

ID=79329656

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111162368.4A Active CN113949830B (en) 2021-09-30 2021-09-30 Image processing method

Country Status (1)

Country Link
CN (1) CN113949830B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120032178A (en) * 2010-09-28 2012-04-05 엘지디스플레이 주식회사 Light emitting display device and method for driving the same
CN102779272A (en) * 2012-06-29 2012-11-14 惠州市德赛西威汽车电子有限公司 Switching method for vehicle detection modes
CN108933897A (en) * 2018-07-27 2018-12-04 南昌黑鲨科技有限公司 Method for testing motion and device based on image sequence
CN109409238A (en) * 2018-09-28 2019-03-01 深圳市中电数通智慧安全科技股份有限公司 A kind of obstacle detection method, device and terminal device
CN109660736A (en) * 2017-10-10 2019-04-19 凌云光技术集团有限责任公司 Method for correcting flat field and device, image authentication method and device
CN110149486A (en) * 2019-05-17 2019-08-20 凌云光技术集团有限责任公司 A kind of automatic testing method, bearing calibration and the system of newly-increased abnormal point
CN111223129A (en) * 2020-01-10 2020-06-02 深圳中兴网信科技有限公司 Detection method, detection device, monitoring equipment and computer readable storage medium
CN111405218A (en) * 2020-03-26 2020-07-10 深圳市微测检测有限公司 Touch screen time delay detection method, system, device, equipment and storage medium
CN111724430A (en) * 2019-03-22 2020-09-29 株式会社理光 Image processing method and device and computer readable storage medium
CN111866383A (en) * 2020-07-13 2020-10-30 Oppo广东移动通信有限公司 Image processing method, terminal and storage medium
CN112132858A (en) * 2019-06-25 2020-12-25 杭州海康微影传感科技有限公司 Tracking method of video tracking equipment and video tracking equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4506882B2 (en) * 2008-06-27 2010-07-21 ソニー株式会社 Image processing apparatus and method, and program
CN109690611B (en) * 2016-09-29 2021-06-22 华为技术有限公司 Image correction method and device
US11134180B2 (en) * 2019-07-25 2021-09-28 Shenzhen Skyworth-Rgb Electronic Co., Ltd. Detection method for static image of a video and terminal, and computer-readable storage medium
JP7287210B2 (en) * 2019-09-19 2023-06-06 コニカミノルタ株式会社 Image processing device and program


Also Published As

Publication number Publication date
CN113949830A (en) 2022-01-18

Similar Documents

Publication Publication Date Title
US11102417B2 (en) Target object capturing method and device, and video monitoring device
Sen-Ching et al. Robust techniques for background subtraction in urban traffic video
US20210110188A1 (en) Stereo imaging device
KR100792283B1 (en) Device and method for auto tracking moving object
US8818055B2 (en) Image processing apparatus, and method, and image capturing apparatus with determination of priority of a detected subject and updating the priority
EP2549738B1 (en) Method and camera for determining an image adjustment parameter
CN109299703B (en) Method and device for carrying out statistics on mouse conditions and image acquisition equipment
CN107886048A (en) Method for tracking target and system, storage medium and electric terminal
EP3641298B1 (en) Method and device for capturing target object and video monitoring device
JP7272024B2 (en) Object tracking device, monitoring system and object tracking method
CN101640788B (en) Method and device for controlling monitoring and monitoring system
US20060056702A1 (en) Image processing apparatus and image processing method
CN111462155B (en) Motion detection method, device, computer equipment and storage medium
CN109842787A (en) A kind of method and system monitoring throwing object in high sky
CN106327488B (en) Self-adaptive foreground detection method and detection device thereof
CN110570454A (en) Method and device for detecting foreign matter invasion
CN110555377B (en) Pedestrian detection and tracking method based on fish eye camera overlooking shooting
CN111242023A (en) Statistical method and statistical device suitable for complex light passenger flow
CN115953719A (en) Multi-target recognition computer image processing system
CN111241928B (en) Face recognition base optimization method, system, equipment and readable storage medium
EP3432575A1 (en) Method for performing multi-camera automatic patrol control with aid of statistics data in a surveillance system, and associated apparatus
US20240048672A1 (en) Adjustment of shutter value of surveillance camera via ai-based object recognition
CN113553992A (en) Escalator-oriented complex scene target tracking method and system
US11373277B2 (en) Motion detection method and image processing device for motion detection
CN113949830B (en) Image processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant