CN114598811A - Panoramic video view quality evaluation method, electronic device and computer-readable storage medium - Google Patents


Info

Publication number
CN114598811A
Authority
CN
China
Prior art keywords
objects
bounding boxes
bounding box
panoramic video
view angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210055008.2A
Other languages
Chinese (zh)
Other versions
CN114598811B (en)
Inventor
陈勃霖
龙良曲
姜文杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Insta360 Innovation Technology Co Ltd
Original Assignee
Insta360 Innovation Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Insta360 Innovation Technology Co Ltd filed Critical Insta360 Innovation Technology Co Ltd
Priority to CN202210055008.2A priority Critical patent/CN114598811B/en
Publication of CN114598811A publication Critical patent/CN114598811A/en
Application granted granted Critical
Publication of CN114598811B publication Critical patent/CN114598811B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2622Signal amplitude transition in the zone between image portions, e.g. soft edges
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

An embodiment of the invention discloses a method for evaluating the view-angle quality of a panoramic video frame, comprising the following steps: acquiring a bounding box for each photographed object in a panoramic video frame; merging the bounding boxes of photographed objects that satisfy an association-degree condition into a candidate bounding box; and performing view-angle quality evaluation on the candidate bounding box. Compared with the prior art, the technical scheme of the invention merges the bounding boxes of adjacent photographed objects that satisfy preset conditions into a candidate bounding box and then evaluates the view-angle quality of that candidate bounding box, which improves the accuracy of view-angle quality evaluation for panoramic video frames and can assist view-angle selection for panoramic video. The invention also discloses an electronic device and a computer-readable storage medium that implement the method.

Description

Panoramic video view quality evaluation method, electronic device and computer-readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method for evaluating quality of a view angle of a panoramic video, an electronic device, and a computer-readable storage medium.
Background
A panoramic camera captures the full visual information of a 360-degree sphere when taking a photograph or recording video. A captured panoramic photograph (or video frame) often contains multiple photographic subjects (such as people or animals), and the photographer tends to pay more attention to subjects with relatively distinctive poses.
The view-angle selection region (proposal) of a panoramic video frame is usually represented by a bounding box, but proposals are typically generated per individual object. For several adjacent objects, splitting them into multiple proposals for view-angle quality evaluation may yield a low quality score for each proposal, or leave an individual subject's view incomplete, so the view-angle quality of the group cannot be evaluated accurately.
Therefore, a view-angle quality evaluation scheme for panoramic video is needed that better evaluates the view-angle quality of each object in a panoramic video frame.
Disclosure of Invention
The invention aims to provide a method for evaluating the view angle quality of a panoramic video, an electronic device and a computer readable storage medium, so as to evaluate the view angle quality of a panoramic video frame more accurately.
In a first aspect, an embodiment of the present invention discloses a method for evaluating quality of a view angle of a panoramic video frame, including: acquiring a bounding box of each shooting object of a panoramic video frame; combining the bounding boxes corresponding to the shot objects meeting the association degree condition into candidate bounding boxes; and performing view angle quality evaluation on the candidate bounding boxes.
In a specific aspect of this embodiment, the association degree condition is: the types of the shot objects belong to combinable types, and the distance between the corresponding boundary frames of the shot objects is smaller than a preset value.
Further, the relevancy condition further includes: the ratio of the areas of the bounding boxes corresponding to the shooting objects is within a preset range.
In a specific aspect of this embodiment, the merging the bounding boxes corresponding to the objects that satisfy the association degree condition into the candidate bounding box includes: determining that the categories of any two shot objects can be combined; determining that the distance between the corresponding boundary frames of the two shot objects is smaller than a preset value; determining that the ratio of the areas of the boundary frames corresponding to the two shot objects is within a preset range; the two bounding boxes are merged into a candidate bounding box.
In a specific aspect of this embodiment, the association degree condition is: the types of the shot objects belong to combinable types, and the overlapping area between the corresponding boundary frames of the shot objects is larger than a preset value.
Further, the relevancy condition further includes: the ratio of the areas of the bounding boxes corresponding to the shooting objects is within a preset range.
In another specific aspect of this embodiment, the merging the bounding boxes corresponding to the objects that satisfy the association degree condition into the candidate bounding box includes: determining that the categories of any two shot objects can be combined; determining whether the overlapping area between the corresponding bounding boxes of the two shot objects is larger than a preset value; determining that the ratio of the areas of the boundary frames corresponding to the two shot objects is within a preset range; the two bounding boxes are merged into a candidate bounding box.
Further, the performing view quality evaluation on the candidate bounding boxes in this embodiment includes: respectively acquiring the view angle quality dimension score, the area dimension score and the position dimension score of each shooting object of the candidate bounding box; and fusing the scores of all the dimensions to obtain a comprehensive score of the candidate bounding box.
In a second aspect, in another embodiment of the present invention, an electronic device is disclosed, which includes a memory, a processor, and a computer program stored on the memory, wherein the processor executes the computer program to implement any of the steps of the above-mentioned viewing angle quality assessment method.
In a third aspect, a computer-readable storage medium is disclosed in a further embodiment of the present invention, and the computer-readable storage medium stores thereon a computer program/instruction, which when executed by a processor, implements any of the above-mentioned steps of the method for estimating quality of a viewing angle.
Compared with the prior art, the technical scheme of the invention combines the bounding boxes of adjacent shot objects meeting the preset conditions into the candidate bounding box, and then carries out view angle quality evaluation on the candidate bounding box, so that the evaluation accuracy of the view angle quality of the panoramic video frame is improved, and the method can be used for assisting the view angle selection of the panoramic video.
Drawings
Fig. 1 is a flowchart of a method for evaluating quality of a view angle of a panoramic video in embodiment 1 of the present invention.
Fig. 2 is a flowchart of one implementation of step S2 in fig. 1.
Fig. 3 is a flowchart of another implementation of step S2 in fig. 1.
Fig. 4 is a sub-flowchart of step S3 in fig. 1.
Fig. 5 is a schematic diagram of a bounding box of a photographic subject in a panoramic video frame.
Fig. 6 is a schematic diagram of the bounding boxes of adjacent photographic subjects merging into a candidate bounding box.
Fig. 7 is a block diagram of an electronic device in embodiment 2 of the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are merely illustrative of the invention and are not intended to limit it.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Example 1
As shown in fig. 1, the present embodiment discloses a method for evaluating quality of a view angle of a panoramic video, which includes the following steps.
S1: and acquiring a bounding box of each shooting object of the panoramic video frame.
In this embodiment, the bounding box of each photographed object is a view-angle selection region (proposal), which can be obtained directly from a detector; in the schematic of subject bounding boxes in a panoramic video frame shown in fig. 5, each rectangular box is a corresponding bounding box. Because a panoramic video frame may contain a great number of photographed objects, there can be a great number of bounding boxes. To retain the bounding boxes most consistent with user aesthetics, the detected bounding boxes may be filtered, for example by discarding boxes at the upper and lower edges of the panoramic video frame and boxes with excessively small area.
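The filtering step described above can be sketched as follows; the `(x1, y1, x2, y2)` box format, the `edge_margin` and `min_area` thresholds, and the function name are illustrative assumptions, not part of the patent.

```python
def filter_boxes(boxes, frame_w, frame_h, edge_margin=0.1, min_area=0.002):
    """Drop detected boxes that enter the top/bottom edge bands of the
    panorama or whose area is a tiny fraction of the frame (assumed thresholds)."""
    kept = []
    for (x1, y1, x2, y2) in boxes:
        near_edge = y1 < frame_h * edge_margin or y2 > frame_h * (1 - edge_margin)
        too_small = (x2 - x1) * (y2 - y1) < frame_w * frame_h * min_area
        if not (near_edge or too_small):
            kept.append((x1, y1, x2, y2))
    return kept
```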
S2: and combining the bounding boxes corresponding to the shooting objects meeting the association degree condition into candidate bounding boxes.
In one implementation of this step, the association-degree condition includes: A1, the categories of the photographed objects belong to mergeable categories; and A2, the distance between the corresponding bounding boxes of the photographed objects is smaller than a preset value. In general, satisfying conditions A1 and A2 is sufficient. In some extreme cases, however, merging two bounding boxes whose areas differ too greatly may cause the user to lose small but noteworthy subjects. To avoid this, an optimized scheme adds a third condition: A3, the ratio of the areas of the corresponding bounding boxes lies within a preset range.
The following describes in detail the case in which the three association-degree conditions A1, A2, and A3 must be satisfied simultaneously; as shown in fig. 2, it includes the following steps.
S21: the determination of the categories of any two photographic subjects can be combined.
First, the category of each photographed object is acquired, for example by a category recognition model; then it is judged whether the categories of any two objects can be merged. If yes, the process proceeds to step S22; if no, the bounding boxes of the two objects are not merged, two objects are reselected, and the process repeats. Note that mergeability may be determined by a preset merge rule, for example that the two objects belong to the same category, or to two categories with a high preset degree of association, such as a person and a skateboard, a person and a bicycle, or a person and a crutch.
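A minimal sketch of such a category-merge rule; the category names come from the examples in the text, but the pair table and data structure are assumptions.

```python
# Preset merge-rule table: same category always merges; otherwise the pair
# must appear in this (illustrative) set of highly associated categories.
MERGEABLE_PAIRS = {
    frozenset({"person", "skateboard"}),
    frozenset({"person", "bicycle"}),
    frozenset({"person", "crutch"}),
}

def categories_mergeable(cat_a, cat_b):
    """Return True if two subject categories may be merged under the preset rule."""
    return cat_a == cat_b or frozenset({cat_a, cat_b}) in MERGEABLE_PAIRS
```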
S22: and determining that the distance between the corresponding boundary frames of the two shot objects is smaller than a preset value.
Specifically, the distance between the two bounding boxes corresponding to the two photographed objects is calculated, for example as the distance between their geometric centers; it is then judged whether this distance is smaller than the preset value. If yes, the process proceeds to step S23; if no, two objects are reselected and the process returns to step S21.
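The geometric-center distance mentioned in step S22 could be computed as follows; the `(x1, y1, x2, y2)` box format is an assumption.

```python
import math

def center_distance(box_a, box_b):
    """Euclidean distance between the geometric centers of two
    (x1, y1, x2, y2) bounding boxes."""
    ax, ay = (box_a[0] + box_a[2]) / 2, (box_a[1] + box_a[3]) / 2
    bx, by = (box_b[0] + box_b[2]) / 2, (box_b[1] + box_b[3]) / 2
    return math.hypot(ax - bx, ay - by)
```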
S23: and determining that the ratio of the areas of the boundary frames corresponding to the two shooting objects is within a preset range.
First, the area ratio of the two bounding boxes is calculated; it is then judged whether the ratio lies within the preset range. If yes, the process proceeds to step S24; if no, two photographed objects are reselected and the process returns to step S21. For example, when a person is photographed in front of a large portrait, the person's bounding box and the portrait's bounding box should not be merged even though they satisfy conditions A1 and A2. As shown in fig. 6, the person category and the skateboard category in the panoramic video frame are mergeable, the distance between the person's bounding box and the skateboard's bounding box is smaller than the preset value, and the area ratio of the two boxes is within the preset range, so the merge condition is satisfied and the two bounding boxes are merged.
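The area-ratio test in step S23 could look like this; the range bounds `lo` and `hi` are assumed placeholders for the preset range, and the box format is an assumption.

```python
def area_ratio_ok(box_a, box_b, lo=0.05, hi=20.0):
    """Check whether the area ratio of two (x1, y1, x2, y2) boxes lies
    within the preset range [lo, hi] (assumed bounds)."""
    def area(b):
        return (b[2] - b[0]) * (b[3] - b[1])
    return lo <= area(box_a) / area(box_b) <= hi
```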
S24: the two bounding boxes are merged into a candidate bounding box.
Specifically, the vertex coordinates of each photographed object's bounding box are obtained in the same plane coordinate system, and a candidate bounding box is formed with vertices (Xmin, Ymin), (Xmin, Ymax), (Xmax, Ymax), and (Xmax, Ymin), where Xmin and Xmax are the minimum and maximum abscissa values over all the bounding boxes, and Ymin and Ymax are the minimum and maximum ordinate values.
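The merge described in the text is the axis-aligned hull of the member boxes; a sketch, with the `(x1, y1, x2, y2)` box format assumed:

```python
def merge_boxes(boxes):
    """Candidate bounding box with vertices (Xmin, Ymin) ... (Xmax, Ymax):
    the axis-aligned hull of all member boxes in one plane coordinate system."""
    xmin = min(b[0] for b in boxes)
    ymin = min(b[1] for b in boxes)
    xmax = max(b[2] for b in boxes)
    ymax = max(b[3] for b in boxes)
    return (xmin, ymin, xmax, ymax)
```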
In another implementation of step S2, the association-degree condition includes: B1, the categories of the photographed objects belong to mergeable categories; and B2, the overlapping area between the corresponding bounding boxes of the photographed objects is larger than a preset value. In general, satisfying conditions B1 and B2 is sufficient. In some extreme cases, however, merging two bounding boxes whose areas differ too greatly may cause the user to lose small but noteworthy subjects. To avoid this, an optimized scheme adds a third condition: B3, the ratio of the areas of the corresponding bounding boxes lies within a preset range.
The following describes in detail the case in which the three association-degree conditions B1, B2, and B3 must be satisfied simultaneously; as shown in fig. 3, it includes the following steps.
S21': the determination of the categories of any two photographic subjects can be combined.
First, the category of each photographed object is acquired, for example by a category recognition model; then it is judged whether the categories of any two objects can be merged. If yes, the process proceeds to step S22'; if no, the bounding boxes of the two objects are not merged, two objects are reselected, and the process repeats. Note that mergeability may be determined by a preset merge rule, for example that the two objects belong to the same category, or to two categories with a high preset degree of association, such as a person and a skateboard, a person and a bicycle, or a person and a crutch.
S22': and determining whether the overlapping area between the corresponding bounding boxes of the two shot objects is larger than a preset value.
Specifically, the overlapping area between the two bounding boxes corresponding to the two photographed objects is calculated; it is then judged whether this area is larger than the preset value. If yes, the process proceeds to step S23'; if no, two objects are reselected and the process returns to step S21'.
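The overlap test in step S22' amounts to the intersection area of the two boxes; a sketch, with the `(x1, y1, x2, y2)` box format assumed:

```python
def overlap_area(box_a, box_b):
    """Intersection area of two (x1, y1, x2, y2) boxes; 0 when disjoint."""
    w = min(box_a[2], box_b[2]) - max(box_a[0], box_b[0])
    h = min(box_a[3], box_b[3]) - max(box_a[1], box_b[1])
    return max(w, 0) * max(h, 0)
```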
S23': and determining that the ratio of the areas of the boundary frames corresponding to the two shooting objects is within a preset range.
First, the area ratio of the bounding boxes corresponding to the two shot objects is calculated, then whether the area ratio of the two bounding boxes is within a preset range is judged, if yes, the step S23 is carried out, and if not, the two shot objects are reselected and then the step S21 is returned. For example, when a person photograph is taken before a large portrait, it is not preferable to combine the bounding box of the person and the bounding box of the large portrait, although both of them satisfy the conditions of B1 and B2.
S24': the two bounding boxes are merged into a candidate bounding box.
Specifically, the vertex coordinates of each photographed object's bounding box are obtained in the same plane coordinate system, and a candidate bounding box is formed with vertices (Xmin, Ymin), (Xmin, Ymax), (Xmax, Ymax), and (Xmax, Ymin), where Xmin and Xmax are the minimum and maximum abscissa values over all the bounding boxes, and Ymin and Ymax are the minimum and maximum ordinate values.
S3: and performing view angle quality evaluation on the candidate bounding boxes.
As shown in fig. 4, performing view-angle quality evaluation on the candidate bounding box in this embodiment includes the following steps.
S31: and respectively acquiring the view angle quality dimension score, the area dimension score and the position dimension score of each shooting object of the candidate bounding box.
In this embodiment, the view-angle quality dimension score of each photographed object is the view-angle quality score of that object's bounding box, computed after weighting according to each object's importance; importance depends in part on the object's category, with people weighted more heavily than animals or plants. The area dimension score of the candidate bounding box is determined by the ratio of its area to the area of the panoramic video frame, and the position dimension score is determined by its position within the panoramic video frame.
S32: and fusing the scores of all the dimensions to obtain a comprehensive score of the candidate bounding box.
In this embodiment, the initial view-angle quality scores of the bounding boxes within the candidate bounding box may be obtained directly, and different weight values are then assigned to reflect the relative importance of each evaluation dimension; the per-dimension weights may be obtained by grid search on a labeled data set. For a candidate bounding box formed by merging the bounding boxes of several objects, distortion is often severe near the upper and lower edges of a panoramic video frame and view-angle quality is poor there, so the position dimension score strongly influences the composite score. In this embodiment, the composite score of a candidate bounding box is: (view-angle quality dimension score + area dimension score) × position dimension score. Fusing the scores in this way distinguishes the influence of the different dimensions and yields a reasonable composite score.
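The fusion formula given in the text, (view-angle quality dimension score + area dimension score) × position dimension score, can be sketched directly; the weighted-average helper for the per-object scores is an assumption about the unspecified weighting scheme.

```python
def view_quality_dimension(object_scores, importance_weights):
    """Assumed aggregation: normalized weighted average of the view-angle
    quality scores of the subjects inside the candidate box."""
    return sum(s * w for s, w in zip(object_scores, importance_weights)) / sum(importance_weights)

def composite_score(view_quality, area_score, position_score):
    """Composite score as given in the text: (view quality + area) x position."""
    return (view_quality + area_score) * position_score
```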
Example 2
As shown in fig. 7, an electronic device, such as a panoramic camera, disclosed in the embodiment of the present invention includes a camera, a memory, a processor, and a computer program stored in the memory, where the processor executes the computer program to implement the steps of the method for estimating the quality of the view angle of a panoramic video in embodiment 1.
Specifically, the camera comprises two fisheye lenses arranged on two opposite faces of the panoramic camera with overlapping fields of view, so as to cover objects within 360 degrees around the panoramic camera.
Example 3
An embodiment of the present invention provides a computer-readable storage medium, on which a computer program/instruction is stored, which, when executed by a processor, implements the steps of the method for evaluating the quality of a view angle of a panoramic video in embodiment 1.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing associated hardware, and the storage medium may be a computer-readable storage medium, such as a ferroelectric Memory (FRAM), a Read Only Memory (ROM), a Programmable Read Only Memory (PROM), an Erasable Programmable Read Only Memory (EPROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a flash Memory, a magnetic surface Memory, an optical disc, or a Compact disc Read Only Memory (CD-ROM), etc.; or may be various devices including one or any combination of the above memories.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (10)

1. A method for evaluating the view angle quality of a panoramic video frame is characterized by comprising the following steps:
acquiring a bounding box of each shooting object of a panoramic video frame;
combining the bounding boxes corresponding to the shot objects meeting the association degree condition into candidate bounding boxes;
and performing view angle quality evaluation on the candidate bounding boxes.
2. The method of claim 1, wherein the relevance condition is: the types of the shot objects belong to combinable types, and the distance between the corresponding boundary frames of the shot objects is smaller than a preset value.
3. The method of claim 2, wherein the relevancy condition further comprises: the ratio of the areas of the bounding boxes corresponding to the shooting objects is within a preset range.
4. The method for evaluating the view angle quality of the panoramic video according to claim 3, wherein the merging the bounding boxes corresponding to the objects satisfying the association degree condition into the candidate bounding box comprises:
determining that the categories of any two shot objects can be combined;
determining that the distance between the corresponding boundary frames of the two shot objects is smaller than a preset value;
determining that the ratio of the areas of the boundary frames corresponding to the two shot objects is within a preset range;
the two bounding boxes are merged into a candidate bounding box.
5. The method of claim 1, wherein the relevance condition is: the types of the shot objects belong to combinable types, and the overlapping area between the corresponding boundary frames of the shot objects is larger than a preset value.
6. The method of claim 5, wherein the relevancy condition further comprises: the ratio of the areas of the bounding boxes corresponding to the shooting objects is within a preset range.
7. The method for evaluating the view angle quality of the panoramic video according to claim 6, wherein the merging the bounding boxes corresponding to the objects satisfying the association degree condition into the candidate bounding box comprises:
determining that the categories of any two shooting objects can be combined;
determining whether the overlapping area between the corresponding bounding boxes of the two shot objects is larger than a preset value;
determining that the ratio of the areas of the boundary frames corresponding to the two shot objects is within a preset range;
the two bounding boxes are merged into a candidate bounding box.
8. The method of claim 1, wherein the performing view quality estimation on the candidate bounding box comprises:
respectively acquiring the view angle quality dimension score, the area dimension score and the position dimension score of each shooting object of the candidate bounding box;
and fusing the scores of all the dimensions to obtain a comprehensive score of the candidate bounding box.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory, characterized in that the processor executes the computer program to implement the steps of the method of any of claims 1 to 8.
10. A computer-readable storage medium, having stored thereon a computer program/instructions, for implementing the steps of the method of any one of claims 1 to 8 when executed by a processor.
CN202210055008.2A 2022-01-18 2022-01-18 Panoramic video view quality assessment method, electronic device and computer readable storage medium Active CN114598811B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210055008.2A CN114598811B (en) 2022-01-18 2022-01-18 Panoramic video view quality assessment method, electronic device and computer readable storage medium


Publications (2)

Publication Number Publication Date
CN114598811A (en) 2022-06-07
CN114598811B (en) 2024-06-21

Family

ID=81805354

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210055008.2A Active CN114598811B (en) 2022-01-18 2022-01-18 Panoramic video view quality assessment method, electronic device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN114598811B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180047193A1 (en) * 2016-08-15 2018-02-15 Qualcomm Incorporated Adaptive bounding box merge method in blob analysis for video analytics
CN108668086A (en) * 2018-08-16 2018-10-16 Oppo广东移动通信有限公司 Atomatic focusing method, device, storage medium and terminal
CN111091022A (en) * 2018-10-23 2020-05-01 宏碁股份有限公司 Machine vision efficiency evaluation method and system
CN111182295A (en) * 2020-01-06 2020-05-19 腾讯科技(深圳)有限公司 Video data processing method, device, equipment and readable storage medium
CN111738262A (en) * 2020-08-21 2020-10-02 北京易真学思教育科技有限公司 Target detection model training method, target detection model training device, target detection model detection device, target detection equipment and storage medium
CN111935479A (en) * 2020-07-30 2020-11-13 浙江大华技术股份有限公司 Target image determination method and device, computer equipment and storage medium
CN113643229A (en) * 2021-06-18 2021-11-12 影石创新科技股份有限公司 Image composition quality evaluation method and device

Also Published As

Publication number Publication date
CN114598811B (en) 2024-06-21

Similar Documents

Publication Publication Date Title
US8345921B1 (en) Object detection with false positive filtering
CN111625091B (en) Label overlapping method and device based on AR glasses
US20030044073A1 (en) Image recognition/reproduction method and apparatus
WO2021081037A1 (en) Method for wall line determination, method, apparatus, and device for spatial modeling
Jin et al. Perspective fields for single image camera calibration
CN113793382A (en) Video image splicing seam searching method and video image splicing method and device
CN106204554A (en) Depth of view information acquisition methods based on multiple focussing image, system and camera terminal
US20230040550A1 (en) Method, apparatus, system, and storage medium for 3d reconstruction
CN113630549A (en) Zoom control method, device, electronic equipment and computer-readable storage medium
CN112101195A (en) Crowd density estimation method and device, computer equipment and storage medium
CN113570530A (en) Image fusion method and device, computer readable storage medium and electronic equipment
CN111105351B (en) Video sequence image splicing method and device
CN116051736A (en) Three-dimensional reconstruction method, device, edge equipment and storage medium
CN118429524A (en) Binocular stereoscopic vision-based vehicle running environment modeling method and system
CN113298867A (en) Accurate positioning method and device for ground object target position based on line matching and storage medium
CN113344789A (en) Image splicing method and device, electronic equipment and computer readable storage medium
CN114598811A (en) Panoramic video view quality evaluation method, electronic device and computer-readable storage medium
CN115222621A (en) Image correction method, electronic device, storage medium, and computer program product
CN113592777B (en) Image fusion method, device and electronic system for double-shot photographing
CN113297344B (en) Three-dimensional remote sensing image-based ground linear matching method and device and ground object target position positioning method
CN116051876A (en) Camera array target recognition method and system of three-dimensional digital model
CN116012609A (en) Multi-target tracking method, device, electronic equipment and medium for looking around fish eyes
CN112818743B (en) Image recognition method and device, electronic equipment and computer storage medium
Shi et al. Spatial calibration method for master-slave camera based on panoramic image mosaic
CN112651330B (en) Target object behavior detection method and device and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant