CN115861603B - Method, device, equipment and medium for locking region of interest in infant care scene

Method, device, equipment and medium for locking region of interest in infant care scene

Info

Publication number
CN115861603B
CN115861603B (application number CN202211717671.0A)
Authority
CN
China
Prior art keywords
points
point
interest
screening
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211717671.0A
Other languages
Chinese (zh)
Other versions
CN115861603A (en)
Inventor
陈辉
熊章
张智
杜沛力
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Xingxun Intelligent Technology Co ltd
Original Assignee
Ningbo Xingxun Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Xingxun Intelligent Technology Co ltd filed Critical Ningbo Xingxun Intelligent Technology Co ltd
Priority to CN202211717671.0A priority Critical patent/CN115861603B/en
Publication of CN115861603A publication Critical patent/CN115861603A/en
Application granted granted Critical
Publication of CN115861603B publication Critical patent/CN115861603B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the field of intelligent nursing and solves the prior-art problem that the user's region of interest becomes invalid when the pan-tilt head on the nursing device rotates, by providing a region-of-interest locking method, device, equipment and storage medium. The method comprises the following steps: acquiring a real-time video stream in an infant care scene, decomposing the video stream into multiple frames of target images, presetting a region of interest in the target images, and extracting the interest image corresponding to the region of interest; screening all points in the target image and the interest image according to a preset rule, performing feature matching on the screened points that meet the requirements, and outputting the matching points corresponding to the region of interest; and obtaining a new region of interest from the matching points, thereby locking the region of interest. The invention accurately locks the region of interest preset by the user and realizes effective nursing of infants.

Description

Method, device, equipment and medium for locking region of interest in infant care scene
Technical Field
The invention relates to the field of intelligent nursing, in particular to a method, a device, equipment and a storage medium for locking an interest area.
Background
With the development and popularization of various intelligent terminals, intelligent nursing equipment is being used more and more widely and is gradually becoming a part of people's daily lives.
In the prior art of intelligent infant nursing, the user presets a region of interest for nursing; however, when the pan-tilt head rotates, the picture captured by the nursing device also moves within the image, so the original region of interest becomes invalid and the user experience is affected.
The prior-art Chinese patent CN111444948A discloses an image feature extraction and matching method comprising the following steps: feature points are preliminarily screened according to the gray difference between a target pixel and its surrounding points to obtain candidate corner points; the candidate corner points are then screened a second time using their gradients in the X and Y directions. In that scheme, the first screening relies only on the gray difference between the target pixel and its surrounding points, and the second screening operates only on the candidate corner points, so the loss of some points is difficult to avoid, which degrades the accuracy of the finally output points and ultimately the accuracy of region locking.
In summary, accurately locking the user's region of interest when the pan-tilt head of the nursing device rotates is a problem that remains to be solved.
Disclosure of Invention
In view of the above, the embodiments of the present invention provide a method, an apparatus, a device, and a storage medium for locking a region of interest, so as to solve the prior-art problem that the locking of the region of interest fails due to rotation of the pan-tilt head.
In a first aspect, an embodiment of the present invention provides a method for locking a region of interest, where the method includes:
s1: acquiring a real-time video stream in an infant care scene, decomposing the video stream into multi-frame target images, presetting an interest area in the target images, and extracting an interest image corresponding to the interest area;
s2: screening all points in the target image and the interest image according to a preset rule, performing feature matching on the screened points meeting the requirements, and outputting matching points corresponding to the interest area;
s3: and obtaining a new region of interest according to each matching point, and realizing the locking of the region of interest.
Preferably, the S2 includes:
s21: converting the interest image into a corresponding gray level image P1, and converting the target image into a corresponding gray level image P2;
s22: performing sharpness analysis on all gray points in the gray images P1 and P2, and outputting the first-screening qualified points and first-screening unqualified points;
s23: performing secondary screening on the first-time screening unqualified points according to a preset supplementary screening rule, and outputting second-time screening qualified points;
s24: taking the first screening qualified points and the second screening qualified points as target points, and respectively extracting a first characteristic set M of the target points in the gray level image P1 and a second characteristic set N of the target points in P2;
s25: and performing feature matching on the target point in the gray image P1 and the target point in the gray image P2 according to the first feature set M and the second feature set N, and outputting a matching point corresponding to the region of interest.
Preferably, the S22 includes:
s221: acquiring gray values of all gray points in the gray image P1 and the gray image P2;
s222: acquiring a point A in the gray level image P1 and the gray level image P2, respectively calculating gray level differences between the point A and a plurality of points around the point A, and extracting gray level qualified points in the points around the point A according to the gray level differences;
s223: counting the number of the gray scale qualified points, and presetting a number threshold num, wherein num is a positive integer, and judging whether the number of the gray scale qualified points is larger than or equal to the number threshold num;
s224: if the number of the gray scale qualified points is greater than or equal to the number threshold num, the point A is a first screening qualified point;
s225: if the number of the gray scale qualified points is smaller than the number threshold num, the point A is a first screening unqualified point;
s226: repeating steps S221 to S225 for all gray points in the gray images P1 and P2, respectively, and outputting all first-screening qualified points and first-screening unqualified points.
Preferably, the S23 includes:
s231: obtaining a point B in the first screening unqualified points, repeating the steps S221 to S225 on points in a preset area around the point B, and outputting points meeting the sharpness requirement;
s232: determining the distance between each point meeting the sharpness requirement and the point B, and taking the point meeting the sharpness requirement that is closest to point B as a second screening qualified point;
s233: and repeating steps S231 to S232 for all points in the first screening non-qualified points respectively, and outputting all second screening qualified points.
Preferably, the S24 includes:
s241: acquiring any point C in the target point;
s242: determining average gray values of all points in a preset square area around the point C, and obtaining a feature vector according to the average gray values and combining the gray values of each point in the square area;
s243: steps S241 to S242 are repeated for all target points in the grayscale images P1 and P2, respectively, and the feature vector set obtained from the target point in the grayscale image P1 is used as the first feature set M, and the feature vector set obtained from the target point in the grayscale image P2 is used as the second feature set M.
Preferably, the S25 includes:
s251: selecting any one feature vector S (A) in the feature set M and any one feature vector S (B) in the feature set N;
s252: calculating a cross-correlation coefficient S(A,B)ncc from the feature vectors S(A) and S(B), wherein -1 ≤ S(A,B)ncc ≤ 1;
s253: presetting a matching threshold, and outputting a point corresponding to the feature vector S(B) as a matching point when the cross-correlation coefficient S(A,B)ncc is larger than the matching threshold;
s254: steps S251 to S252 are repeated for all feature vectors in the feature sets M and N, and all matching points in the feature set N are output.
Preferably, the S3 includes:
s31: acquiring all matching points in the feature set N and inputting the matching points into the real-time video stream;
s32: and drawing all matching points in the feature set N on the next frame of image in the real-time video stream to form a new region of interest, wherein the targets contained in the new region of interest and the preset region of interest are the same.
In a second aspect, an embodiment of the present invention further provides a region of interest locking device, where the device includes:
the image acquisition module is used for acquiring a real-time video stream in an infant care scene, decomposing the video stream into a plurality of frame target images, setting an interest area in the target images in advance, and extracting an interest image corresponding to the interest area;
the matching point extraction module is used for screening all points in the target image and the interest image according to a preset rule, carrying out feature matching on the screened points meeting the requirements, and outputting each matching point corresponding to the interest area;
and the interest region locking module is used for obtaining a new interest region according to each matching point and realizing the locking of the interest region.
In a third aspect, an embodiment of the present invention further provides an electronic device, including: at least one processor, at least one memory and computer program instructions stored in the memory, which when executed by the processor, implement the method as in the first aspect of the embodiments described above.
In a fourth aspect, embodiments of the present invention also provide a storage medium having stored thereon computer program instructions which, when executed by a processor, implement a method as in the first aspect of the embodiments described above.
In summary, the beneficial effects of the invention are as follows:
The method, device, equipment and storage medium for locking a region of interest provided by the embodiments of the present invention acquire a real-time video stream in an infant care scene, decompose the video stream into multiple frames of target images, preset a region of interest in the target images, and extract the interest image corresponding to the region of interest; screen all points in the target image and the interest image according to a preset rule, perform feature matching on the screened points that meet the requirements, and output the matching points corresponding to the region of interest; and obtain a new region of interest from the matching points, thereby locking the region of interest. Compared with the prior art, the screening process avoids the loss of qualified points and ensures screening accuracy, which in turn ensures the accuracy of feature matching, improves the accuracy of the finally output new region of interest, accurately completes region locking, and thus realizes effective nursing of infants.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required in the embodiments are briefly described below; a person of ordinary skill in the art may obtain other drawings from these drawings without inventive effort, and such drawings also fall within the scope of the present invention.
FIG. 1 is a flow chart showing the overall operation of the region of interest locking method in embodiment 1 of the present invention;
FIG. 2 is a schematic flow chart of extracting matching points in embodiment 1 of the present invention;
FIG. 3 is a schematic flow chart of the first (preliminary) screening in embodiment 1 of the present invention;
FIG. 4 is a schematic flow chart of the second (supplementary) screening in embodiment 1 of the present invention;
FIG. 5 is a flow chart of extracting feature vectors in embodiment 1 of the present invention;
FIG. 6 is a schematic flow chart of feature matching in embodiment 1 of the present invention;
FIG. 7 is a flow chart of obtaining a new region of interest according to all the matching points in the embodiment 1 of the present invention to realize the locking of the region of interest;
FIG. 8 is a schematic diagram of a preset region of interest in embodiment 1 of the present invention;
FIG. 9 is a schematic diagram of a locked region of interest in embodiment 1 of the present invention;
FIG. 10 is a block diagram showing the structure of the region-of-interest locking device in embodiment 2 of the present invention;
fig. 11 is a schematic diagram of the structure of an electronic device in embodiment 3 of the present invention.
Detailed Description
Features and exemplary embodiments of various aspects of the present invention are described in detail below. In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are intended merely to illustrate the invention, not to limit it. It will be apparent to one skilled in the art that the present invention may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the invention by showing examples thereof.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
Example 1
Referring to fig. 1, embodiment 1 of the present invention provides a region of interest locking method, which includes:
s1: acquiring a real-time video stream in an infant care scene, decomposing the video stream into multi-frame target images, presetting an interest area in the target images, and extracting an interest image corresponding to the interest area;
specifically, a real-time video stream in an infant nursing scene is obtained, where the video stream covers color video captured in the daytime and infrared video captured at night, enabling twenty-four-hour nursing of the infant. The video stream is decomposed into multiple frames of target images, and the user conveniently sets a region of interest in the target images by operating the APP on a mobile terminal; the region of interest is a polygonal area formed by sequentially connecting the points of interest selected by the user on the video stream into a closed loop, and the interest image corresponding to the region of interest is extracted. The user can set different regions of interest according to actual nursing demands, so infants are nursed more effectively and the user's nursing experience improves.
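For illustration only (the patent does not prescribe any programming language or library), the following minimal Python sketch shows one way step S1 could be realized with OpenCV and NumPy; the file name, the polygon coordinates and the helper name extract_interest_image are hypothetical.

import cv2
import numpy as np

def extract_interest_image(frame, polygon_points):
    # Crop the axis-aligned bounding box enclosing the user's closed polygon ROI.
    pts = np.array(polygon_points, dtype=np.int32)
    x, y, w, h = cv2.boundingRect(pts)
    return frame[y:y + h, x:x + w].copy()

cap = cv2.VideoCapture("care_stream.mp4")   # hypothetical source; a camera index or RTSP URL would also work
roi_polygon = [(120, 80), (400, 80), (420, 300), (260, 360), (100, 280)]  # points the user selected in the APP
ok, target_image = cap.read()               # one frame of the decomposed real-time video stream
if ok:
    interest_image = extract_interest_image(target_image, roi_polygon)
cap.release()

Cropping the bounding box of the polygon is only one reasonable reading of extracting the interest image; a polygon mask could be used instead.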
S2: screening all points in the target image and the interest image according to a preset rule, performing feature matching on the screened points meeting the requirements, and outputting matching points corresponding to the interest area;
specifically, the motion state of the pan-tilt head on the nursing device is obtained. When the current motion state is rotation, it is automatically identified as an interest area locking instruction and the region of interest is locked: all points in the target image and the interest image are screened according to the preset rule, feature matching is performed on the screened points that meet the requirements, and each matching point corresponding to the region of interest is output. The screening and feature-matching process ensures the accuracy of every output matching point and therefore effectively completes the locking of the region of interest. If the pan-tilt head does not rotate, no further operation is performed, which avoids unnecessary processing.
In one embodiment, referring to fig. 2, the step S2 includes:
s21: converting the interest image into a corresponding gray level image P1, and converting the target image into a corresponding gray level image P2;
s22: performing sharpness analysis on all gray points in the gray images P1 and P2, and outputting the first-screening qualified points and first-screening unqualified points;
specifically, sharpness is a key index of image quality: it reflects how much detail the imaging system resolves. By performing sharpness analysis on all gray points in the gray images P1 and P2, the first-screening qualified points and the first-screening unqualified points are output.
In one embodiment, referring to fig. 3, the step S22 includes:
s221: acquiring gray values of all gray points in the gray image P1 and the gray image P2;
s222: acquiring a point A in the gray level image P1 and the gray level image P2, respectively calculating gray level differences between the point A and a plurality of points around the point A, and extracting gray level qualified points in the points around the point A according to the gray level differences;
specifically, any point A in the gray image P1 or the gray image P2 is selected, and a square region with side length equal to five is taken with that point as the center; the region contains twenty-five points including point A. For each surrounding point Ai, the absolute gray difference fabs(Ai - A) is computed; the gray-difference threshold is preset to fifteen, and every surrounding point whose fabs(Ai - A) is larger than fifteen is output as a gray-qualified point. By using the gray difference between point A and its surrounding points in this way, points with a sufficient difference are retained as qualified and the rest are screened out, which ensures the validity and accuracy of the data that feeds the feature-vector extraction in the subsequent flow.
S223: counting the number of the gray scale qualified points, presetting a number threshold num, wherein num is a positive integer, judging whether the number of the gray scale qualified points is larger than or equal to the number threshold num,
s224: if the number of the gray scale qualified points is greater than or equal to the number threshold num, the point A is a first screening qualified point;
s225: if the number of the gray scale qualified points is smaller than the number threshold num, the point A is a first screening unqualified point;
specifically, every gray-qualified point that appears increments the count by one. The count threshold num is preset to twelve: if the count is greater than or equal to twelve, point A is output as a first-screening qualified point; otherwise, point A is output as a first-screening unqualified point. (An illustrative sketch of this first screening is given after step S226 below.)
S226: repeating steps S221 to S225 for all gray points in the gray images P1 and P2, respectively, and outputting all first-time qualified point screening and first-time unqualified point screening.
S23: performing secondary screening on the first-time screening unqualified points according to a preset supplementary screening rule, and outputting second-time screening qualified points;
in one embodiment, referring to fig. 4, the step S23 includes:
s231: obtaining a point B in the first screening unqualified points, repeating the steps S221 to S225 on points in a preset area around the point B, and outputting points meeting the sharpness requirement;
s232: determining the distance between each point meeting the sharpness requirement and the point B, and taking the point meeting the sharpness requirement that is closest to point B as a second screening qualified point;
specifically, any point B(Bx, By) among the first-screening unqualified points is selected, where Bx and By are the abscissa and ordinate of point B. Steps S221 to S225 are repeated for the eight neighborhood points Ne(x(i), y(i)) of point B, where x(i) and y(i) are the abscissa and ordinate of the i-th point and i is an integer ranging from 1 to 8, and the points meeting the sharpness requirement and the points not meeting it are output. If the eight neighborhood points of point B contain points meeting the sharpness requirement, those points are sorted to select the most suitable one; the sorting rule is to calculate the distance dis = |Bx - x(i)| + |By - y(i)| and output the point with the minimum dis as the second-screening qualified point. If all eight neighborhood points of point B fail the sharpness requirement, point B is eliminated. Compared with the prior art, adding this second screening avoids the loss of qualified points, preserves the accuracy of the first screening, and improves the accuracy of the finally output points of interest of the user. (A sketch of this supplementary screening follows step S233 below.)
S233: and repeating steps S231 to S232 for all points in the first screening non-qualified points respectively, and outputting all second screening qualified points.
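Continuing the same illustrative sketch (an assumption, not the patent's implementation), the supplementary screening of steps S231 to S233 can reuse the mask above, because re-running steps S221 to S225 on a neighbour point is equivalent to reading its entry in that mask; the distance rule is implemented as |dx| + |dy|, which is the assumed reading of the dis formula.

import numpy as np

def second_screening(first_qualified):
    # For each first-screening unqualified point B, examine its 8 neighbourhood points,
    # keep those meeting the sharpness requirement, and take the nearest one as a
    # second-screening qualified point; B is eliminated when no neighbour qualifies.
    h, w = first_qualified.shape
    neighbours = [(0, -1), (-1, 0), (1, 0), (0, 1),     # distance 1 under |dx| + |dy|
                  (-1, -1), (1, -1), (-1, 1), (1, 1)]   # distance 2
    second_pts = set()
    for by in range(1, h - 1):
        for bx in range(1, w - 1):
            if first_qualified[by, bx]:
                continue                                # only first-screening unqualified points
            best = None
            for dx, dy in neighbours:
                nx, ny = bx + dx, by + dy
                if first_qualified[ny, nx]:             # neighbour meets the sharpness requirement
                    d = abs(dx) + abs(dy)
                    if best is None or d < best[0]:
                        best = (d, (nx, ny))
            if best is not None:
                second_pts.add(best[1])
    return sorted(second_pts)                           # (x, y) second-screening qualified points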
S24: taking the first screening qualified points and the second screening qualified points as target points, and respectively extracting a first characteristic set M of the target points in the gray level image P1 and a second characteristic set N of the target points in P2;
in one embodiment, referring to fig. 5, the step S24 includes:
s241: acquiring any point C in the target point;
s242: determining average gray values of all points in a preset square area around the point C, and obtaining a feature vector according to the average gray values and combining the gray values of each point in the square area;
specifically, any point C among the first-screening qualified points and the second-screening qualified points is selected, and a square region with side length eleven is taken with point C as the center; the region contains 121 points in total. The average gray value avg(C) of all points in the square region is calculated, and the feature vector is formed as S(C) = (C1 - avg(C), C2 - avg(C), C3 - avg(C), ..., C121 - avg(C)), where avg(C) is the average gray value of the 121 points in the square region and C1, C2, ..., C121 are the gray values of each of those points; S(C), a 121-dimensional vector, is output as the feature vector of the target point C. Because the feature vector combines the average gray value of all points in the square region with the gray value of each individual point, it reflects the overall characteristics of the region, avoids the inaccuracy of a single point, and facilitates effective feature matching in the subsequent step. (A sketch of this descriptor extraction follows step S243 below.)
S243: steps S241 to S242 are repeated for all target points in the grayscale images P1 and P2, respectively, and the feature vector set obtained from the target point in the grayscale image P1 is used as the first feature set M, and the feature vector set obtained from the target point in the grayscale image P2 is used as the second feature set M.
S25: and performing feature matching on the target point in the gray image P1 and the target point in the gray image P2 according to the first feature set M and the second feature set N, and outputting a matching point corresponding to the region of interest.
In one embodiment, referring to fig. 6, the step S25 includes:
s251: selecting any one feature vector S (A) in the feature set M and any one feature vector S (B) in the feature set N;
s252: calculating a cross-correlation coefficient S(A,B)ncc from the feature vectors S(A) and S(B), wherein -1 ≤ S(A,B)ncc ≤ 1;
s253: presetting a matching threshold, and outputting a point corresponding to the feature vector S(B) as a matching point when the cross-correlation coefficient S(A,B)ncc is larger than the matching threshold;
specifically, any feature vector S(D) in the feature set M and any feature vector S(E) in the feature set N are selected, where S(D) is the feature vector of point D and S(E) is the feature vector of point E. The cross-correlation coefficient is computed as S(D,E)ncc = SUM(di × ei) / (SQRT(SUM(di × di)) × SQRT(SUM(ei × ei))), where i ranges from 1 to 121 and is an integer, di and ei are the components of S(D) and S(E), SUM denotes summation and SQRT denotes the square-root operation. S(D,E)ncc lies between -1 and 1, and the closer it is to 1, the more similar the two regions are and the better the match. The matching threshold is set to 0.85: if S(D,E)ncc is greater than or equal to 0.85, point E is output as a matching point. (A sketch of this matching step follows step S254 below.)
S254: steps S251 to S252 are repeated for all feature vectors in the feature sets M and N, and all matching points in the feature set N are output.
S3: and obtaining a new region of interest according to each matching point, and realizing the locking of the region of interest.
In one embodiment, referring to fig. 7, the step S3 includes:
s31: acquiring all matching points in the feature set N and inputting the matching points into the real-time video stream;
s32: and drawing all matching points in the feature set N on the next frame of image in the real-time video stream to form a new region of interest, wherein the targets contained in the new region of interest and the preset region of interest are the same.
Specifically, referring to fig. 8, the user presets a region of interest formed by connecting five points whose coordinates are A(x1, y1), B(x2, y2), C(x3, y3), D(x4, y4) and F(x5, y5). When the camera of the care device rotates, as shown in fig. 9, the five points A, B, C, D and F find their respective matching points A'(x1', y1'), B'(x2', y2'), C'(x3', y3'), D'(x4', y4') and F'(x5', y5') through steps S1 to S2. The new target region formed by connecting A', B', C', D' and F' contains the same target as the original region of interest, so the locking of the region is completed. The region of interest is conveniently designated by the user through the APP and may include the infant's activity area, sleeping area and the like; effective and accurate locking of such privacy-sensitive regions of interest greatly improves the intelligent nursing experience for infants.
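One possible way to redraw the locked region from the matched vertices (steps S31 and S32) is sketched below; it assumes the match dictionary returned by the previous step is keyed by the original vertex coordinates, and it uses OpenCV only for drawing.

import cv2
import numpy as np

def draw_locked_roi(next_frame, original_vertices, matches):
    # Map each user-selected vertex (A, B, C, D, F) to its matching point
    # (A', B', C', D', F') and draw the resulting closed polygon on the next frame.
    new_vertices = [matches[v] for v in original_vertices if v in matches]
    if len(new_vertices) >= 3:                          # a polygon needs at least three vertices
        pts = np.array(new_vertices, dtype=np.int32).reshape(-1, 1, 2)
        cv2.polylines(next_frame, [pts], isClosed=True, color=(0, 255, 0), thickness=2)
    return next_frame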
Example 2
Referring to fig. 10, an embodiment of the present invention further provides a region of interest locking device, which is characterized in that the device includes:
the image acquisition module is used for acquiring a real-time video stream in an infant care scene, decomposing the video stream into a plurality of frame target images, setting an interest area in the target images in advance, and extracting an interest image corresponding to the interest area;
the matching point extraction module is used for screening all points in the target image and the interest image according to a preset rule, carrying out feature matching on the screened points meeting the requirements, and outputting each matching point corresponding to the interest area;
and the interest region locking module is used for obtaining a new interest region according to each matching point and realizing the locking of the interest region.
With the region-of-interest locking device of this embodiment, the image acquisition module acquires a real-time video stream in an infant care scene, decomposes the video stream into multiple frames of target images, presets a region of interest in the target images, and extracts the interest image corresponding to the region of interest; the matching point extraction module screens all points in the target image and the interest image according to a preset rule, performs feature matching on the screened points that meet the requirements, and outputs each matching point corresponding to the region of interest; and the region-of-interest locking module obtains a new region of interest from the matching points, thereby locking the region of interest. Compared with the prior art, the screening process avoids the loss of qualified points and ensures screening accuracy, which in turn ensures the accuracy of feature matching, improves the accuracy of the finally output new region of interest, accurately completes region locking, and thus realizes effective nursing of infants.
Example 3
In addition, the region of interest locking method of the embodiment of the present invention described in connection with the drawings may be implemented by an electronic device. Fig. 11 shows a schematic hardware structure of an electronic device according to an embodiment of the present invention.
The electronic device may include a processor and memory storing computer program instructions.
In particular, the processor may comprise a Central Processing Unit (CPU), or an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or may be configured as one or more integrated circuits that implement embodiments of the present invention.
The memory may include mass storage for data or instructions. By way of example, and not limitation, the memory may comprise a Hard Disk Drive (HDD), floppy Disk Drive, flash memory, optical Disk, magneto-optical Disk, magnetic tape, or universal serial bus (Universal Serial Bus, USB) Drive, or a combination of two or more of the foregoing. The memory may include removable or non-removable (or fixed) media, where appropriate. The memory may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory is a non-volatile solid state memory. In a particular embodiment, the memory includes Read Only Memory (ROM). The ROM may be mask programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically Erasable PROM (EEPROM), electrically rewritable ROM (EAROM), or flash memory, or a combination of two or more of these, where appropriate.
The processor implements any of the region of interest locking methods of the above embodiments by reading and executing computer program instructions stored in memory.
In one example, the electronic device may also include a communication interface and a bus. The processor, the memory, and the communication interface are connected by a bus and complete communication with each other as shown in fig. 11.
The communication interface is mainly used for realizing communication among the modules, the devices, the units and/or the equipment in the embodiment of the invention.
The bus includes hardware, software, or both that couple the components of the device to one another. By way of example, and not limitation, the buses may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCI-X) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local bus (VLB), or another suitable bus, or a combination of two or more of the above. The bus may include one or more buses, where appropriate. Although embodiments of the invention have been described and illustrated with respect to a particular bus, the invention contemplates any suitable bus or interconnect.
Example 4
In addition, in combination with the region of interest locking method in the above embodiment, an embodiment of the present invention may be implemented by providing a computer readable storage medium. The computer readable storage medium has stored thereon computer program instructions; the computer program instructions, when executed by a processor, implement any of the region of interest locking methods of the above embodiments.
In summary, the embodiment of the invention provides a method, a device, equipment and a storage medium for locking a region of interest.
It should be understood that the invention is not limited to the particular arrangements and instrumentality described above and shown in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present invention are not limited to the specific steps described and shown, and those skilled in the art can make various changes, modifications and additions, or change the order between steps, after appreciating the spirit of the present invention.
The functional blocks shown in the above-described structural block diagrams may be implemented in hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine readable medium or transmitted over transmission media or communication links by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuitry, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, radio Frequency (RF) links, and the like. The code segments may be downloaded via computer networks such as the internet, intranets, etc.
It should also be noted that the exemplary embodiments mentioned in this disclosure describe some methods or systems based on a series of steps or devices. However, the present invention is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, or may be performed in a different order from the order in the embodiments, or several steps may be performed simultaneously.
In the foregoing, only the specific embodiments of the present invention are described, and it will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, modules and units described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein. It should be understood that the scope of the present invention is not limited thereto, and any equivalent modifications or substitutions can be easily made by those skilled in the art within the technical scope of the present invention, and they should be included in the scope of the present invention.

Claims (8)

1. A method for locking a region of interest in an infant care scene, the method comprising:
s1: acquiring a real-time video stream in an infant care scene, decomposing the video stream into multi-frame target images, presetting an interest area in the target images, and extracting an interest image corresponding to the interest area, wherein the interest area is a polygonal area formed by sequentially connecting, into a closed loop, the points of interest selected by the user on the video stream;
s2: acquiring the motion state of the pan-tilt head on the nursing equipment, identifying an interest region locking instruction when the current motion state of the pan-tilt head is rotation, screening all points in the target image and the interest image according to a preset rule, performing feature matching on the screened points meeting the requirements, and outputting each matching point corresponding to the interest region;
s3: obtaining a new region of interest according to each matching point, and realizing region of interest locking;
wherein, the S2 includes:
s21: converting the interest image into a corresponding gray level image P1, and converting the target image into a corresponding gray level image P2;
s22: performing sharpness analysis on all gray points in the gray images P1 and P2, and outputting the first-screening qualified points and first-screening unqualified points;
s23: performing secondary screening on the first-time screening unqualified points according to a preset supplementary screening rule, and outputting second-time screening qualified points;
s24: taking the first screening qualified point and the second screening qualified point as target points, and respectively extracting a first characteristic set M of the target points in the gray level image P1 and a second characteristic set N of the target points in the P2;
s25: performing feature matching on the target point in the gray image P1 and the target point in the gray image P2 according to the first feature set M and the second feature set N, and outputting a matching point corresponding to the region of interest;
the S23 includes:
s231: obtaining a point B in the first screening unqualified points, carrying out sharpness analysis on points in a preset area around the point B, and outputting points meeting sharpness requirements;
s232: determining the distance between each point meeting the sharpness requirement and the point B, and taking the point meeting the sharpness requirement that is closest to point B as a second screening qualified point;
s233: and repeating steps S231 to S232 for all points in the first screening non-qualified points respectively, and outputting all second screening qualified points.
2. The method for locking a region of interest in an infant care scenario according to claim 1, wherein S22 comprises:
s221: acquiring gray values of all gray points in the gray image P1 and the gray image P2;
s222: acquiring a point A in the gray level image P1 and the gray level image P2, respectively calculating gray level differences between the point A and a plurality of points around the point A, and extracting gray level qualified points in the points around the point A according to the gray level differences;
s223: counting the number of the gray scale qualified points, and presetting a number threshold num, wherein num is a positive integer, and judging whether the number of the gray scale qualified points is larger than or equal to the number threshold num;
s224: if the number of the gray scale qualified points is greater than or equal to the number threshold num, the point A is a first screening qualified point;
s225: if the number of the gray scale qualified points is smaller than the number threshold num, the point A is a first screening unqualified point;
s226: repeating steps S221 to S225 for all gray points in the gray images P1 and P2, respectively, and outputting all first-screening qualified points and first-screening unqualified points.
3. The method for locking a region of interest in an infant care scenario according to claim 1, wherein S24 comprises:
s241: acquiring any point C in the target point;
s242: determining average gray values of all points in a preset square area around the point C, and obtaining a feature vector according to the average gray values and combining the gray values of each point in the square area;
s243: steps S241 to S242 are repeated for all target points in the grayscale images P1 and P2, respectively, and the feature vector set obtained from the target point in the grayscale image P1 is used as the first feature set M, and the feature vector set obtained from the target point in the grayscale image P2 is used as the second feature set M.
4. The method for locking a region of interest in an infant care scenario according to claim 3, wherein S25 comprises:
s251: selecting any one feature vector S (A) in the feature set M and any one feature vector S (B) in the feature set N;
s252: calculating a cross-correlation coefficient S(A,B)ncc from the feature vectors S(A) and S(B), wherein -1 ≤ S(A,B)ncc ≤ 1;
s253: presetting a matching threshold, and outputting a point corresponding to the feature vector S(B) as a matching point when the cross-correlation coefficient S(A,B)ncc is larger than the matching threshold;
s254: steps S251 to S253 are repeated for all feature vectors in the feature sets M and N, and all matching points in the feature set N are output.
5. The method for locking a region of interest in an infant care scenario according to any one of claims 1 to 4, wherein S3 comprises:
s31: acquiring all matching points in the feature set N and inputting the matching points into the real-time video stream;
s32: and drawing all matching points in the feature set N on the next frame of image in the real-time video stream to form a new region of interest, wherein the targets contained in the new region of interest and the preset region of interest are the same.
6. A region of interest locking device in an infant care setting, the device comprising:
the image acquisition module is used for acquiring a real-time video stream in an infant care scene, decomposing the video stream into multi-frame target images, presetting an interest area in the target images, and extracting an interest image corresponding to the interest area, wherein the interest area is a polygonal area which is formed by sequentially connecting selected interest points of a user on the video stream into a closed loop;
the matching point extraction module is used for acquiring the motion state of the pan-tilt head on the nursing equipment, identifying an interest region locking instruction when the current motion state of the pan-tilt head is rotation, screening all points in the target image and the interest image according to a preset rule, performing feature matching on the screened points meeting the requirements, and outputting each matching point corresponding to the interest region;
the interest region locking module is used for obtaining a new interest region according to each matching point and realizing interest region locking;
screening all points in the target image and the interest image according to a preset rule, performing feature matching on the screened points meeting the requirements, and outputting each matching point corresponding to the interest region, wherein the step of outputting the matching points comprises the following steps:
converting the interest image into a corresponding gray level image P1, and converting the target image into a corresponding gray level image P2;
performing sharpness analysis on all gray points in the gray images P1 and P2, and outputting the first-screening qualified points and first-screening unqualified points;
performing secondary screening on the first-time screening unqualified points according to a preset supplementary screening rule, and outputting second-time screening qualified points;
taking the first screening qualified point and the second screening qualified point as target points, and respectively extracting a first characteristic set M of the target points in the gray level image P1 and a second characteristic set N of the target points in the P2;
performing feature matching on the target point in the gray image P1 and the target point in the gray image P2 according to the first feature set M and the second feature set N, and outputting a matching point corresponding to the region of interest;
and performing secondary screening on the first-time screening unqualified points according to a preset supplementary screening rule, and outputting second-time screening qualified points comprises the following steps:
obtaining a point B in the first screening unqualified points, carrying out sharpness analysis on points in a preset area around the point B, and outputting points meeting sharpness requirements;
determining the distance between each point meeting the sharpness requirement and the point B, and taking the point meeting the sharpness requirement that is closest to point B as a second screening qualified point;
and for all points among the first-screening unqualified points, respectively and repeatedly acquiring a point B among the first-screening unqualified points, performing sharpness analysis on the points in a preset area around point B, outputting the points meeting the sharpness requirement, determining the distance between each point meeting the sharpness requirement and point B, taking the point meeting the sharpness requirement that is closest to point B as a second-screening qualified point, and outputting all second-screening qualified points.
7. An electronic device, comprising: at least one processor, at least one memory, and computer program instructions stored in the memory, which when executed by the processor, implement the method of any one of claims 1-5.
8. A storage medium having stored thereon computer program instructions, which when executed by a processor, implement the method of any of claims 1-5.
CN202211717671.0A 2022-12-29 2022-12-29 Method, device, equipment and medium for locking region of interest in infant care scene Active CN115861603B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211717671.0A CN115861603B (en) 2022-12-29 2022-12-29 Method, device, equipment and medium for locking region of interest in infant care scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211717671.0A CN115861603B (en) 2022-12-29 2022-12-29 Method, device, equipment and medium for locking region of interest in infant care scene

Publications (2)

Publication Number Publication Date
CN115861603A CN115861603A (en) 2023-03-28
CN115861603B (en) 2023-09-26

Family

ID=85656201

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211717671.0A Active CN115861603B (en) 2022-12-29 2022-12-29 Method, device, equipment and medium for locking region of interest in infant care scene

Country Status (1)

Country Link
CN (1) CN115861603B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7049983B2 (en) * 2018-12-26 2022-04-07 株式会社日立製作所 Object recognition device and object recognition method

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101165359B1 (en) * 2011-02-21 2012-07-12 (주)엔써즈 Apparatus and method for analyzing relation with image and image or video
CN102999939A (en) * 2012-09-21 2013-03-27 魏益群 Coordinate acquisition device, real-time three-dimensional reconstruction system, real-time three-dimensional reconstruction method and three-dimensional interactive equipment
WO2020218024A1 (en) * 2019-04-24 2020-10-29 日本電信電話株式会社 Panoramic video image synthesis device, panoramic video image synthesis method, and panoramic video image synthesis program
CN111444948A (en) * 2020-03-21 2020-07-24 哈尔滨工程大学 Image feature extraction and matching method
CN111383236A (en) * 2020-04-24 2020-07-07 中国人民解放军总医院 Method, apparatus and computer-readable storage medium for labeling regions of interest
CN111626263A (en) * 2020-06-05 2020-09-04 北京百度网讯科技有限公司 Video interesting area detection method, device, equipment and medium
WO2022156525A1 (en) * 2021-01-25 2022-07-28 北京沃东天骏信息技术有限公司 Object matching method and apparatus, and device
CN113378886A (en) * 2021-05-14 2021-09-10 珞石(山东)智能科技有限公司 Method for automatically training shape matching model

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A robust deformed image matching method for multi-source image matching; Guili Xu et al.; Infrared Physics & Technology; Vol. 115; 1-7 *
Research on TDICCD image registration and stitching based on mirror stitching; Su Ting; China Masters' Theses Full-text Database, Information Science and Technology (No. 05); 45-60 *
A survey of multimodal remote sensing image matching methods; Sui Haigang et al.; Acta Geodaetica et Cartographica Sinica; Vol. 51 (No. 09); 1848-1861 *
A region-of-interest extraction method in vehicle detection; Zhi Jun; Computer Engineering and Design (No. 12); 3013-3015 *

Also Published As

Publication number Publication date
CN115861603A (en) 2023-03-28

Similar Documents

Publication Publication Date Title
CN111144337B (en) Fire detection method and device and terminal equipment
CN111080526A (en) Method, device, equipment and medium for measuring and calculating farmland area of aerial image
CN107908998B (en) Two-dimensional code decoding method and device, terminal equipment and computer readable storage medium
CN106296576A (en) Image processing method and image processing apparatus
CN113038272B (en) Method, device and equipment for automatically editing baby video and storage medium
CN107146217A (en) A kind of image detecting method and device
CN109587392B (en) Method and device for adjusting monitoring equipment, storage medium and electronic device
CN115861603B (en) Method, device, equipment and medium for locking region of interest in infant care scene
CN108805838B (en) Image processing method, mobile terminal and computer readable storage medium
CN113780492A (en) Two-dimensional code binarization method, device and equipment and readable storage medium
CN112330618B (en) Image offset detection method, device and storage medium
CN112818165A (en) Data processing method, device, equipment and storage medium
CN112508065B (en) Robot and positioning method and device thereof
CN113822818B (en) Speckle extraction method, device, electronic device, and storage medium
CN116246308A (en) Multi-target tracking early warning method and device based on visual recognition and terminal equipment
CN116249015A (en) Camera shielding detection method and device, camera equipment and storage medium
CN113469130A (en) Shielded target detection method and device, storage medium and electronic device
US11195288B2 (en) Method for processing a light field video based on the use of a super-rays representation
CN100469103C (en) Image noise filtering system and method
CN110475044A (en) Image transfer method and device, electronic equipment, computer readable storage medium
CN113689411B (en) Counting method, device and storage medium based on visual recognition
CN115601793B (en) Human body bone point detection method and device, electronic equipment and storage medium
CN114646320B (en) Path guiding method and device, electronic equipment and readable storage medium
CN111861948B (en) Image processing method, device, equipment and computer storage medium
CN114596599A (en) Face recognition living body detection method, device, equipment and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant