CN111416916B - Object position judging circuit - Google Patents
- Publication number
- CN111416916B (application CN201910006476.9A)
- Authority
- CN
- China
- Prior art keywords
- frame
- circuit
- image signal
- image processing
- frames
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
Abstract
The invention discloses an object position judging circuit comprising a receiving circuit and a detecting circuit. In operation, the receiving circuit receives an image signal, and the detecting circuit detects the position of an object in an Nth frame of the image signal, determines a partial region in an (N+M)th frame of the image signal according to the position of the object in the Nth frame, and detects only that partial region to determine the position of the object in the (N+M)th frame, where N and M are positive integers.
Description
Technical Field
The present invention relates to image processing, and more particularly to a circuit for determining the position of a specific object in an image.
Background
Current face recognition systems use deep learning or neural network methods to analyze and process an image and recognize the position of a face in it. However, because a deep-learning artificial-intelligence module requires a large amount of computation, a large image may exceed the module's capacity, or an engineer must design a more capable artificial-intelligence module, which increases the cost of design and manufacture.
Disclosure of Invention
Therefore, one objective of the present invention is to provide an object position determining circuit that, according to the detection result of a previous frame, detects the position of an object in only a partial region of a subsequent frame, thereby reducing the burden on the artificial-intelligence module.
In an embodiment of the present invention, an object position determining circuit is disclosed, which includes a receiving circuit and a detecting circuit. In operation, the receiving circuit receives an image signal, and the detecting circuit detects the position of an object in an Nth frame of the image signal, determines a partial region in an (N+M)th frame of the image signal according to the position of the object in the Nth frame, and detects only that partial region to determine the position of the object in the (N+M)th frame, where N and M are positive integers.
In another embodiment of the present invention, a circuit structure including an object position determining circuit and an image processing circuit is disclosed. The object position determining circuit includes a receiving circuit, a detecting circuit, and an output circuit. The receiving circuit receives an image signal; the detecting circuit detects the position of an object in an Nth frame of the image signal, determines a partial region in an (N+M)th frame of the image signal according to the position of the object in the Nth frame, and detects only that partial region to determine the position of the object in the (N+M)th frame, where N and M are positive integers; and the output circuit outputs a coordinate range in each of the Nth frame and the (N+M)th frame as the position of the object. The image processing circuit receives the image signal and processes it according to the coordinate ranges in the Nth and (N+M)th frames to generate a plurality of output images to a display panel, where they are displayed.
Drawings
Fig. 1 is a schematic diagram of an electronic device according to an embodiment of the invention.
Fig. 2 is a timing diagram illustrating the operation of the object position determining circuit according to an embodiment of the invention.
FIG. 3 is a diagram illustrating an object detection operation according to a first embodiment of the present invention.
FIG. 4 is a diagram illustrating an object detection operation according to a second embodiment of the present invention.
FIG. 5 is a diagram illustrating an object detection operation according to a third embodiment of the present invention.
FIG. 6 is a diagram illustrating an image processing circuit performing image processing according to object position information.
FIG. 7 is a flowchart illustrating an image processing method according to an embodiment of the invention.
Description of the symbols:

Symbol | Meaning
---|---
110 | Image processing circuit
120 | Object position determining circuit
122 | Receiving circuit
124 | Detecting circuit
128 | Output circuit
130 | Display panel
610, 620, 630 | Coordinate ranges
622, 632 | Regions
700~708 | Steps
Din | Image signal
I0~I10 | Frames
I5'~I9' | Partial regions of frames
F4~F10 | Position information
Detailed Description
Fig. 1 is a schematic diagram of an electronic device according to an embodiment of the invention. As shown in fig. 1, the electronic device includes an image processing circuit 110, an object position determining circuit 120 and a display panel 130, wherein the object position determining circuit 120 includes a receiving circuit 122, a detecting circuit 124 and an output circuit 128. In the present embodiment, the image processing circuit 110 and the object position determining circuit 120 may be integrated in a single chip, and the electronic device may be any electronic device including a display panel, such as a desktop computer, a notebook computer, or a mobile device.
In operation, the image processing circuit 110 receives an image signal Din and generates a plurality of output images (frames) to the display panel 130 for display. Meanwhile, the object position determining circuit 120 determines the position of an object (e.g., a human face) in a plurality of frames of the image signal Din according to its content and supplies object position information (e.g., the coordinate range of the object in each frame) to the image processing circuit 110, which uses it to process the image signal Din. However, because object position determination in an image is performed by a deep-learning or neural-network method that requires a high amount of computation, the object position determining circuit 120 cannot detect the object position in every frame in real time (for example, the time it takes to process one complete frame may equal the time the display panel 130 takes to display 4-5 frames). The object position determining circuit 120 in this embodiment therefore performs object detection on only a partial region of most frames, so that it can provide sufficient object position information to the image processing circuit 110 within its processing capability.
Specifically, please refer to fig. 1 and fig. 2, where fig. 2 is a timing diagram of the operation of the object position determining circuit 120 according to an embodiment of the invention. As shown in fig. 2, assume the time for the object position determining circuit 120 to process a complete frame is approximately the time for the display panel 130 to display 4-5 frames. While the image processing circuit 110 receives and processes frames I0-I3, the detecting circuit 124 skips frames I0-I3 and performs object detection directly on the entire content of frame I4, determining the position of the object in frame I4 and outputting a coordinate range as the object's position information F4. Then, for at least one of frames I5-I8, the detecting circuit 124 determines a partial region according to the position information F4 (i.e., the coordinate range) of the object in frame I4, and detects only that partial region to determine the object's position in that frame. In this embodiment, the detecting circuit 124 at least generates the position information F8 of the object in frame I8; it then determines a partial region I9' of frame I9 according to F8 and detects only the region I9' to determine the position of the object in frame I9.
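The region-restriction step above can be sketched as follows. This is an illustrative helper, not the patent's implementation: `search_region` is a hypothetical name, and the margin is an assumed motion allowance around the coordinate range found in frame N.

```python
def search_region(bbox, frame_w, frame_h, margin):
    """Expand a bounding box (x, y, w, h) by `margin` pixels on every
    side, clamped to the frame, to form the partial region that the
    detecting circuit searches in the next detected frame."""
    x, y, w, h = bbox
    x0 = max(0, x - margin)
    y0 = max(0, y - margin)
    x1 = min(frame_w, x + w + margin)
    y1 = min(frame_h, y + h + margin)
    return (x0, y0, x1 - x0, y1 - y0)

# The object found at (40, 30, 64, 64) in frame I4 yields a larger
# search window for frame I5; the rest of frame I5 is never examined.
region = search_region((40, 30, 64, 64), 1920, 1080, 32)
```

Only this window is handed to the neural-network detector, which is what keeps the per-frame computation within the circuit's capability.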
In this embodiment, the detecting circuit 124 can decide, according to the size or occupied proportion of the object in frame I4, whether to perform object detection on at least one of frames I5-I7 or to skip frames I5-I7 entirely. For example, referring to the first embodiment shown in FIG. 3, if the size or proportion of the object in frame I4 is below a threshold, the detecting circuit 124 may perform object detection on each of frames I5-I8 to generate object position information. In detail, the detecting circuit 124 selects a partial region I5' of frame I5 for object detection (the remaining area is not detected) according to the detection result of frame I4 (position information F4); selects a partial region I6' of frame I6 according to the detection result of frame I5 (position information F5); selects a partial region I7' of frame I7 according to the detection result of frame I6 (position information F6); selects a partial region I8' of frame I8 according to the detection result of frame I7 (position information F7); selects a partial region I9' of frame I9 according to the detection result of frame I8 (position information F8); and so on. Considering the movement of the object, the region selected for detection in each of frames I6-I9 is not smaller than the region selected in the previous frame; for example, frames I5-I8 may each use a region of 10% of a frame's size for object detection, while frame I9 may use a region of 50% of a frame's size.
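The non-decreasing window schedule above can be written as a small sketch; the 10% and 50% figures come from this embodiment, while the function name and the five-frame cycle length are illustrative assumptions.

```python
def region_fraction(k, small=0.10, large=0.50, cycle=5):
    """Fraction of the frame area searched k frames after the last
    full-frame detection (k >= 1). Frames inside the cycle use the
    small window; the final frame of the cycle (k == cycle) uses the
    large window to absorb accumulated object motion."""
    return large if k % cycle == 0 else small

# Frames I5-I8 (k = 1..4) search 10% of the frame; frame I9 (k = 5)
# searches 50% before the cycle repeats from frame I10.
fractions = [region_fraction(k) for k in range(1, 6)]
```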
Referring to the second embodiment shown in fig. 4, if the size or occupied proportion of the object in frame I4 is higher than a threshold, the detecting circuit 124 may perform object detection only on frames I7-I8 to generate position information F7 and F8, while frames I5 and I6 are not subjected to the object detection described in this embodiment. In detail, the detecting circuit 124 selects a partial region I7' of frame I7 for object detection (the remaining area is not detected) according to the detection result of frame I4 (position information F4); selects a partial region I8' of frame I8 according to the detection result of frame I7 (position information F7); selects a partial region of frame I9 according to the detection result of frame I8 (position information F8); and so on. Considering the movement of the object, the region selected for detection in each of frames I8-I9 is not smaller than the region selected in the previous frame; for example, frames I7-I8 may each use a region of 20% of a frame's size, while frame I9 may use a region of about 50% of a frame's size.
Referring to the third embodiment shown in fig. 5, if the size or occupied proportion of the object in frame I4 is very high, the detecting circuit 124 may perform object detection only on frame I8 to generate position information F8, while frames I5-I7 are not subjected to the object detection described in this embodiment. In detail, the detecting circuit 124 selects a partial region I8' of frame I8 for object detection (the remaining area is not detected) according to the detection result of frame I4 (position information F4), selects a partial region of frame I9 according to the detection result of frame I8 (position information F8), and so on. Considering the movement of the object, the region selected in frame I9 is not smaller than the region selected in the previous frame; for example, frame I8 may use a region of 40% of a frame's size, while frame I9 may use a region of about 50% of a frame's size.
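The three embodiments differ only in how many frames after I4 are skipped before the next partial detection, which depends on the object's share of the frame. A sketch under assumed thresholds (the function name and the values of t1 and t2 are illustrative; the patent fixes no numeric thresholds):

```python
def detection_step(object_fraction, t1=0.25, t2=0.50):
    """Choose M, the number of frames from the last detected frame to
    the next partially detected frame, based on the object's occupied
    proportion of that frame. Thresholds are illustrative assumptions."""
    if object_fraction < t1:
        return 1   # first embodiment: detect each of frames I5-I8
    if object_fraction < t2:
        return 3   # second embodiment: skip I5-I6, detect I7 next
    return 4       # third embodiment: skip I5-I7, detect I8 next
```

A larger object leaves less room to search around it, so fewer intermediate detections are worthwhile and more frames are skipped.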
For frames I10-I14, the detecting circuit 124 operates as it did for frames I5-I9: for at least one of frames I10-I13, it determines a partial region according to the position information F9 of the object in frame I9, and detects only that partial region to determine the object's position in that frame. In this embodiment, the detecting circuit 124 at least generates the position information of the object in frame I13; it then determines a partial region of frame I14 according to that position information and detects only that region to determine the position of the object in frame I14, and so on.
Returning to fig. 2, frames I0-I10 are sequentially processed by the image processing circuit 110 and then transmitted to the display panel 130 for display. The object position determining circuit 120 can therefore detect the object position in frame I4 while frame I0 is being processed by the image processing circuit 110, and temporarily store the position information F4 of the object in frame I4. It then sequentially detects the object positions in frames I5-I8 while frame I4 is being processed, and transmits the position information F5-F8 of the object in frames I5-I8 to the image processing circuit 110, so that the image processing circuit 110 can process the image signal according to that position information. In addition, considering the movement of the object, frame I9 uses a larger region for object detection, so the object position determining circuit 120 detects the object position in the partial region I9' of frame I9 while frame I7 is being processed, and transmits the position information F9 of the object in frame I9 to the image processing circuit 110 for processing.
As described in the above embodiments, since only a partial region of each of frames I5-I9 needs object detection, the detecting circuit 124 can perform object detection and determination for every frame or most frames within the computing capability of the deep-learning or neural-network module, thereby performing object detection more effectively.
In one embodiment, the image processing circuit 110 may add a pattern to a frame to mark the object according to the object position information from the object position determining circuit 120. Referring to fig. 6, assume the detected object is a human face. The object position determining circuit 120 transmits the object detection result of frame I4 (i.e., the illustrated coordinate range 610) to the image processing circuit 110 as the position information of the object, and the image processing circuit 110 adds a rectangular frame to frame I4 to mark the object's position. The object position determining circuit 120 then selects a region 622 from frame I5 for object detection according to the coordinate range 610, where the region 622 includes the coordinate range 610 and, in one embodiment, shares its center position (center coordinate). The object position determining circuit 120 transmits the detection result of frame I5 (the coordinate range 620) to the image processing circuit 110, which adds a rectangular frame to frame I5 to mark the object. Likewise, the object position determining circuit 120 selects a region 632 from frame I6 for object detection according to the coordinate range 620, where the region 632 includes the coordinate range 620 and, in one embodiment, shares its center position; it then transmits the detection result of frame I6 (the coordinate range 630) to the image processing circuit 110, which adds a rectangular frame to frame I6 to mark the object.
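The relation "region 622 includes coordinate range 610 and shares its center coordinate" can be sketched as follows; the helper name and the scale factor are assumptions, not values from the patent.

```python
def centered_window(bbox, scale, frame_w, frame_h):
    """Return a region `scale` times wider and taller than the
    coordinate range bbox = (x, y, w, h), sharing its center
    coordinate and clamped to the frame boundaries."""
    x, y, w, h = bbox
    cx, cy = x + w / 2.0, y + h / 2.0
    nw, nh = w * scale, h * scale
    x0 = max(0.0, cx - nw / 2.0)
    y0 = max(0.0, cy - nh / 2.0)
    x1 = min(float(frame_w), cx + nw / 2.0)
    y1 = min(float(frame_h), cy + nh / 2.0)
    return (x0, y0, x1 - x0, y1 - y0)

# A face found in coordinate range 610 at (80, 60, 40, 40) gives a
# search region (622) for the next frame centered on the same point.
region_622 = centered_window((80, 60, 40, 40), 2.0, 640, 480)
```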
In another embodiment, the image processing circuit 110 may apply different image processing methods to different areas of a frame according to the object position information from the object position determining circuit 120. For example, in fig. 6, for frame I4 the image processing circuit 110 may apply a first image processing method (e.g., contrast or color adjustment) to the face portion inside the coordinate range 610 and a second image processing method to the portion outside it. Similarly, for frames I5 and I6, the image processing circuit 110 may apply the first image processing method to the face portions inside the coordinate ranges 620 and 630, and the second image processing method to the portions outside them.
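A minimal sketch of region-dependent processing; the pixel representation and the two processing callables are stand-ins, since the patent does not prescribe specific operations.

```python
def process_frame(frame, bbox, inside_fn, outside_fn):
    """Apply inside_fn to pixels inside the coordinate range
    bbox = (x, y, w, h) and outside_fn to all other pixels.
    `frame` is a list of row lists of scalar pixel values."""
    x, y, w, h = bbox
    return [
        [inside_fn(px) if (x <= c < x + w and y <= r < y + h) else outside_fn(px)
         for c, px in enumerate(row)]
        for r, row in enumerate(frame)
    ]

# Boost the "face" region (here the single top-left pixel) and leave
# the rest of a tiny 2x2 frame untouched.
result = process_frame([[1, 1], [1, 1]], (0, 0, 1, 1),
                       lambda p: p + 10, lambda p: p)
```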
FIG. 7 is a flowchart illustrating an image processing method according to an embodiment of the invention. With reference to the above disclosure, the flow of the image processing method is as follows.
Step 700: the process begins.
Step 702: an image signal is received.
Step 704: detecting a position of an object in an nth frame of the video signal, determining a partial region in an (N + M) th frame of the video signal according to the position of the object in the nth frame, and detecting only the partial region to determine the position of the object in the (N + M) th frame, wherein N, M is a positive integer.
Step 706: respectively outputting a coordinate range in the Nth frame and the (N + M) th frame as the position of the object.
Step 708: processing the image signal according to the coordinate range in the Nth image frame and the (N + M) th image frame to generate a plurality of output images to a display panel and displaying the output images on the display panel.
To briefly summarize: in the circuit and image processing method of the present invention, object detection is performed on only a partial region of most frames, so the detecting circuit can perform object detection and determination for every frame or most frames within the computation capability of the deep-learning or neural-network module, completing object detection more effectively and reducing the circuit's image-recognition burden.
The above description is only a preferred embodiment of the present invention, and all equivalent changes and modifications made in accordance with the claims of the present invention should be covered by the present invention.
Claims (10)
1. An object position determining circuit includes:
a receiving circuit for receiving an image signal; and
a detecting circuit, coupled to the receiving circuit, for detecting a position of an object in an Nth frame of the image signal, determining a partial region in an (N+M)th frame of the image signal according to the position of the object in the Nth frame, and detecting only the partial region to determine the position of the object in the (N+M)th frame to generate object position information, wherein N and M are positive integers,
wherein an image processing circuit is coupled to the object position determining circuit, the image processing circuit being arranged to receive the image signal, process the image signal according to the object position information from the object position determining circuit, and generate a plurality of output images to a display panel for display thereon,
wherein, while the image processing circuit receives and processes the 1st to (N-1)th frames, the detecting circuit skips the 1st to (N-1)th frames and directly performs object detection on the entire content of the Nth frame to determine the position of the object in the Nth frame, and the 1st to (N+M)th frames are sequentially processed by the image processing circuit and then transmitted to the display panel for display.
2. The object position determining circuit of claim 1, wherein the detecting circuit determines the size of the partial region in the (N+M)th frame according to the size or occupied proportion of the object in the Nth frame.
3. The object position determining circuit of claim 2, wherein the detecting circuit determines the value of M according to the size or occupied proportion of the object in the Nth frame.
4. The object position determining circuit of claim 3, wherein when the size or occupied proportion of the object in the Nth frame is smaller than a threshold value, M is 1; and when the size or occupied proportion of the object in the Nth frame is not smaller than the threshold value, M is a positive integer greater than 1.
5. The object position determining circuit of claim 4, wherein the detecting circuit does not detect the position of the object in an (N+1)th frame of the image signal when the size or occupied proportion of the object in the Nth frame is not smaller than the threshold value.
6. The object position determining circuit of claim 1, wherein the detecting circuit determines a partial region of an (N+M+K)th frame of the image signal according to the position of the object in the (N+M)th frame, and detects only the partial region of the (N+M+K)th frame to determine the position of the object in the (N+M+K)th frame, wherein K is a positive integer.
7. The object position determining circuit of claim 6, wherein the partial region of the (N+M+K)th frame is larger than the partial region of the (N+M)th frame.
8. A circuit, comprising:
an object position determining circuit, comprising:
a receiving circuit for receiving an image signal;
a detecting circuit, coupled to the receiving circuit, for detecting a position of an object in an Nth frame of the image signal, determining a partial region in an (N+M)th frame of the image signal according to the position of the object in the Nth frame, and detecting only the partial region to determine the position of the object in the (N+M)th frame, wherein N and M are positive integers; and
an output circuit for outputting a coordinate range in each of the Nth frame and the (N+M)th frame as the position of the object; and
an image processing circuit, coupled to the object position determining circuit, for receiving the image signal and processing the image signal according to the coordinate ranges of the Nth frame and the (N+M)th frame to generate a plurality of output images to a display panel and display the output images thereon,
wherein, while the image processing circuit receives and processes the 1st to (N-1)th frames, the detecting circuit skips the 1st to (N-1)th frames and directly performs object detection on the entire content of the Nth frame to determine the position of the object in the Nth frame, and the 1st to (N+M)th frames are sequentially processed by the image processing circuit and then transmitted to the display panel for display.
9. The circuit of claim 8, wherein the image processing circuit marks the object by adding a pattern to the Nth frame according to the coordinate range in the Nth frame, and marks the object by adding the pattern to the (N+M)th frame according to the coordinate range in the (N+M)th frame, so as to generate the output images to the display panel and display them thereon.
10. The circuit of claim 8, wherein the image processing circuit processes the image within the coordinate range of the Nth frame differently from other areas, and processes the image within the coordinate range of the (N+M)th frame differently from other areas, so as to generate the output images to the display panel and display them thereon.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910006476.9A CN111416916B (en) | 2019-01-04 | 2019-01-04 | Object position judging circuit |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111416916A CN111416916A (en) | 2020-07-14 |
CN111416916B true CN111416916B (en) | 2022-07-26 |
Family
ID=71493940
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910006476.9A Active CN111416916B (en) | 2019-01-04 | 2019-01-04 | Object position judging circuit |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111416916B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007214886A (en) * | 2006-02-09 | 2007-08-23 | Fujifilm Corp | Image processor |
CN107547826A (en) * | 2016-06-23 | 2018-01-05 | 吕嘉雄 | Picture frame analytical equipment |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE60025406T2 (en) * | 1999-03-02 | 2006-09-21 | Hitachi Denshi K.K. | Motion picture information display method and apparatus |
JP2003018434A (en) * | 2001-06-28 | 2003-01-17 | Olympus Optical Co Ltd | Imaging apparatus |
US7760956B2 (en) * | 2005-05-12 | 2010-07-20 | Hewlett-Packard Development Company, L.P. | System and method for producing a page using frames of a video stream |
US7860162B2 (en) * | 2005-09-29 | 2010-12-28 | Panasonic Corporation | Object tracking method and object tracking apparatus |
WO2013021275A1 (en) * | 2011-08-10 | 2013-02-14 | Yitzchak Kempinski | A method for optimizing size and position of a search window of a tracking system |
JP5978639B2 (en) * | 2012-02-06 | 2016-08-24 | ソニー株式会社 | Image processing apparatus, image processing method, program, and recording medium |
JP6432513B2 (en) * | 2013-08-23 | 2018-12-05 | 日本電気株式会社 | Video processing apparatus, video processing method, and video processing program |
US9183639B2 (en) * | 2013-09-26 | 2015-11-10 | Intel Corporation | Image frame processing including usage of acceleration data in assisting object location |
JP6655878B2 (en) * | 2015-03-02 | 2020-03-04 | キヤノン株式会社 | Image recognition method and apparatus, program |
JP6700872B2 (en) * | 2016-03-07 | 2020-05-27 | キヤノン株式会社 | Image blur correction apparatus and control method thereof, image pickup apparatus, program, storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||