CN113723539A - Test question information acquisition method and device - Google Patents


Info

Publication number
CN113723539A
CN113723539A
Authority
CN
China
Prior art keywords
image
reference image
matching
saturation
test question
Prior art date
Legal status
Pending
Application number
CN202111026973.9A
Other languages
Chinese (zh)
Inventor
陈天
梁桂浩
Current Assignee
Beijing Yundie Zhixue Technology Co., Ltd.
Original Assignee
Beijing Yundie Zhixue Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Beijing Yundie Zhixue Technology Co., Ltd.
Priority to CN202111026973.9A
Publication of CN113723539A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides a test question information collection method and device. The method comprises: acquiring a first image sequence of a first test question; processing the first image sequence to generate a first high dynamic range (HDR) image; acquiring image features of the first HDR image; and matching the image features with a preset image feature model, and placing the first image in a first question bank when the matching succeeds. The sharpness of the collected test questions is thereby ensured.

Description

Test question information acquisition method and device
Technical Field
The invention relates to the technical field of image processing, in particular to a test question information acquisition method and device.
Background
Question bank data is an important teaching resource. In daily study, learned knowledge is commonly consolidated by working through test questions, and students often improve and stabilize their command of knowledge points by doing large numbers of practice exercises.
Therefore, how to collect a sufficient number of test questions while ensuring that the collected questions are sharp has become an urgent problem to solve.
Disclosure of Invention
The embodiments of the invention aim to provide a test question information collection method and device, so as to solve the problem that collected test questions are unclear.
In order to solve the above problem, in a first aspect, the present invention provides a method for collecting test question information, where the method includes:
acquiring a first image sequence of a first test question;
processing the first image sequence to generate a first High Dynamic Range (HDR) image;
acquiring image features of the first HDR image;
and matching the image characteristics with a preset image characteristic model, and setting the first image in a first question bank when the matching is successful.
Preferably, before the first image is placed in the first question bank when the matching is successful, the method further comprises:
comparing the matching result with a first threshold;
when the matching result is greater than the first threshold, the matching succeeds;
and when the matching result is not greater than the first threshold, the matching fails.
Preferably, the first image sequence comprises a reference image, a first non-reference image, and a second non-reference image; the exposure durations for the first test question increase in the order: first non-reference image, reference image, second non-reference image.
Preferably, the processing the first image sequence to generate a first HDR image comprises:
acquiring a first motion region and a second motion region of an image sequence; the first motion area is an area where the first non-reference image has a gray value difference relative to the reference image, and the second motion area is an area where the second non-reference image has a gray value difference relative to the reference image;
determining a target motion area according to a comparison result between the first proportion and a second threshold value and a comparison result between the second proportion and the second threshold value; wherein the first proportion is a proportion occupied by the first motion region in the first non-reference image, the second proportion is a proportion occupied by the second motion region in the second non-reference image, and the target motion region includes at least one connected region;
obtaining the first HDR image according to a first weight value and a second weight value, wherein the first weight value is one of: the sum of the per-pixel product of saturation, contrast and exposure degree in the first non-reference image and the same product in the second non-reference image; the per-pixel product of saturation, contrast and exposure degree in the first non-reference image; or the per-pixel product of saturation, contrast and exposure degree in the second non-reference image; and the second weight value is the per-pixel product of saturation, contrast and exposure degree in the reference image.
In a second aspect, the present invention provides a test question information collecting device, including:
an acquisition unit, configured to acquire a first image sequence of a first test question;
a processing unit, configured to process the first image sequence to generate a first high dynamic range (HDR) image;
the acquiring unit is further used for acquiring image characteristics of the first HDR image;
and the matching unit is used for matching the image characteristics with a preset image characteristic model, and when the matching is successful, the first image is arranged in a first question bank.
Preferably, the apparatus further comprises: a comparison unit;
the comparison unit is configured to compare the matching result with a first threshold;
when the matching result is greater than the first threshold, the matching succeeds;
and when the matching result is not greater than the first threshold, the matching fails.
Preferably, the first image sequence comprises a reference image, a first non-reference image, and a second non-reference image; the exposure durations for the first test question increase in the order: first non-reference image, reference image, second non-reference image.
Preferably, the acquiring unit is further configured to acquire a first motion region and a second motion region of the image sequence; the first motion area is an area where the first non-reference image has a gray value difference relative to the reference image, and the second motion area is an area where the second non-reference image has a gray value difference relative to the reference image;
a determination unit configured to determine a target motion region according to a comparison result between the first ratio and a second threshold and a comparison result between the second ratio and the second threshold; wherein the first proportion is a proportion occupied by the first motion region in the first non-reference image, the second proportion is a proportion occupied by the second motion region in the second non-reference image, and the target motion region includes at least one connected region;
the determining unit is further configured to obtain the first HDR image according to a first weight value and a second weight value, wherein the first weight value is one of: the sum of the per-pixel product of saturation, contrast and exposure degree in the first non-reference image and the same product in the second non-reference image; the per-pixel product of saturation, contrast and exposure degree in the first non-reference image; or the per-pixel product of saturation, contrast and exposure degree in the second non-reference image; and the second weight value is the per-pixel product of saturation, contrast and exposure degree in the reference image.
Therefore, the test question information collection method provided by the invention acquires a first image sequence of a first test question; processes the first image sequence to generate a first high dynamic range (HDR) image; acquires image features of the first HDR image; and matches the image features with a preset image feature model, placing the first image in a first question bank when the matching succeeds. The sharpness of the collected test questions is thereby ensured.
Drawings
Fig. 1 is a flowchart of a test question information collection method according to an embodiment of the present invention;
FIG. 2 is a flowchart of step 120 provided by an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a test question information collecting device according to an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention are described in further detail below with reference to the accompanying drawings and embodiments.
Fig. 1 is a flowchart of a test question information acquisition method according to an embodiment of the present invention. As shown in fig. 1, the test question information collection method includes the following steps:
step 110, a first image sequence of the first test question is obtained.
A mobile terminal can photograph the test questions and store the images in its memory. The first image sequence comprises a reference image, a first non-reference image, and a second non-reference image, whose exposure durations for the first test question increase in the order: first non-reference image, reference image, second non-reference image. In the prior art, each captured image must be checked for sharpness, and blurry images must be re-shot. This method instead captures several images of the first test question and synthesizes them, which overcomes the blur caused by the photographer's hand shaking or by a third person turning the test paper, and thereby improves efficiency.
Step 120, processing the first image sequence to generate a first High-Dynamic Range (HDR) image.
Step 130, acquiring image features of the first HDR image.
Image features are akin to an image fingerprint: they uniquely characterize the "interesting parts" of an image and distinguish it from other images. Their precise definition is usually dictated by the specific problem or application at hand. Repeatable detectability is the most important property of an image feature: the features extracted from the same image should be identical regardless of viewing angle, displacement, or occlusion. Common image features include color features, texture features, shape features, and spatial relationship features.
A color feature is a global feature that describes the surface properties of the scene corresponding to an image or image region, for example a gray histogram. A texture feature is likewise a global feature describing such surface properties, for example entropy, angular second moment, and local stationarity based on co-occurrence matrices. A shape feature is a local feature describing the shape of an object within a local region, for example boundary features. A spatial relationship feature refers to the spatial positions or relative directional relationships among the objects segmented from an image; such relationships can be classified into connection/adjacency, overlap/occlusion, and containment/inclusion relationships, among others.
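As a concrete illustration of the simplest of these features, a normalized gray histogram can be computed with NumPy alone. This is a generic sketch, not the patent's implementation; the function name and bin count are illustrative choices.

```python
import numpy as np

def gray_histogram(gray: np.ndarray, bins: int = 16) -> np.ndarray:
    """Normalized gray-level histogram: a simple global color feature."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
    return hist / hist.sum()  # normalize so images of different sizes are comparable

# Toy 4x4 "image": half dark pixels, half bright pixels
img = np.array([[0] * 4] * 2 + [[255] * 4] * 2, dtype=np.uint8)
feat = gray_histogram(img, bins=2)  # two bins: [0,128) and [128,256)
```

Because the histogram is normalized, it can be compared directly between images of different resolutions.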
And 140, matching the image characteristics with a preset image characteristic model, and setting the first image in a first question bank when the matching is successful.
Before the first image is placed in the first question bank upon a successful match, the method further comprises: comparing the matching result with a first threshold; when the matching result is greater than the first threshold, the matching succeeds; when it is not, the matching fails.
The first threshold may be set as needed, for example to 95%; the higher the first threshold, the lower the matching success rate. The preset image feature model may be built with a convolutional neural network.
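The threshold gate described above can be sketched as follows. Cosine similarity stands in for whatever score the preset feature model actually produces, and the 0.95 default mirrors the 95% example; both are illustrative assumptions rather than details from the patent.

```python
import numpy as np

def match_score(feat: np.ndarray, model_feat: np.ndarray) -> float:
    """Cosine similarity between an extracted feature and the model's feature."""
    denom = float(np.linalg.norm(feat) * np.linalg.norm(model_feat))
    return float(feat @ model_feat) / denom if denom else 0.0

def is_match(feat: np.ndarray, model_feat: np.ndarray,
             first_threshold: float = 0.95) -> bool:
    """Matching succeeds only when the score exceeds the first threshold."""
    return match_score(feat, model_feat) > first_threshold

a = np.array([1.0, 0.0, 0.0])  # identical features: score 1.0, a match
b = np.array([0.0, 1.0, 0.0])  # orthogonal features: score 0.0, no match
```

Raising `first_threshold` tightens the gate, which matches the text's observation that a higher threshold lowers the matching success rate.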
Thus, by synthesizing the several captured test question pictures into an HDR image, the sharpness of the collected test questions is ensured.
Step 120 is described in detail below. Fig. 2 is a flowchart of step 120 provided by the embodiment of the present invention. As shown in fig. 2, step 120 includes the following specific steps:
step 210, a first motion region and a second motion region of an image sequence are acquired.
The first motion area is an area where the first non-reference image has a difference in gray value with respect to the reference image, and the second motion area is an area where the second non-reference image has a difference in gray value with respect to the reference image.
It should be noted that this application uses three images only as an example; the number of images may be greater than three. In that case, the first non-reference image and the second non-reference image are the frames whose exposure durations are closest to the reference image's, with the first non-reference image's exposure duration shorter than the reference image's and the second non-reference image's longer.
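For more than three frames, the selection rule above (the two exposures that bracket the reference most closely) reduces to a few lines. The function name and the millisecond values are illustrative assumptions, not from the patent.

```python
def pick_non_reference(exposures, ref_index):
    """Return the exposure just below and just above the reference exposure."""
    ref = exposures[ref_index]
    shorter = [e for e in exposures if e < ref]   # candidates for the first non-reference frame
    longer = [e for e in exposures if e > ref]    # candidates for the second non-reference frame
    return max(shorter), min(longer)              # closest exposure on each side

# Five bracketed exposures (ms); the middle frame is taken as the reference
first_nr, second_nr = pick_non_reference([10, 20, 40, 80, 160], ref_index=2)
```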
Next, how to acquire the first non-reference image, the reference image, and the second non-reference image will be described in detail.
First, a terminal device may capture three images of the target scene (the test question) with progressively increasing exposure durations: the original first non-reference image, the original reference image, and the original second non-reference image. The terminal device may be any device with a camera, including but not limited to a camera (e.g. a digital camera), a video camera, a mobile phone (e.g. a smartphone), a tablet computer (Pad), a personal digital assistant (PDA), a portable device (e.g. a portable computer), or a wearable device; the embodiments of the present invention place no particular limitation on this.
Next, the RGB images of the original first non-reference image, the original reference image, and the original second non-reference image are converted into grayscale images, respectively.
Then, on the luminance channel, histogram matching (HM) is performed on the grayscale image of the original first non-reference image using the grayscale image of the original reference image as the standard, brightening it; HM is likewise performed on the grayscale image of the original second non-reference image against the same standard, darkening it.
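A minimal NumPy sketch of the histogram matching (HM) step: remap a source grayscale image so its cumulative distribution follows the reference's. In practice `skimage.exposure.match_histograms` does the same job; this hand-rolled version is only illustrative.

```python
import numpy as np

def histogram_match(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Remap source gray levels so the histogram follows the reference's (CDF matching)."""
    src_vals, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size      # source cumulative distribution
    ref_cdf = np.cumsum(ref_counts) / reference.size   # reference cumulative distribution
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)     # align the two CDFs
    return mapped[src_idx].reshape(source.shape)

# A dark source and a reference 10x brighter: matching should brighten the source
src = np.arange(16, dtype=float).reshape(4, 4)
ref = src * 10
out = histogram_match(src, ref)
```

After matching, the output's gray distribution follows the reference, which is exactly the brightening/darkening alignment the text describes.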
Finally, a SURF feature point detection algorithm is used to obtain homography matrices relating the brightened original first non-reference image and the darkened original second non-reference image to the original reference image, and the original images are each warped by the corresponding homography matrix so that the three images are aligned. This eliminates the motion caused by the operator's hand shake or by environmental effects on the test paper (e.g. wind), yielding the first non-reference image, the reference image, and the second non-reference image.
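The alignment step is, at its core, mapping points through a 3x3 homography. SURF lives in OpenCV's non-free contrib build, so the sketch below shows only the warping arithmetic in NumPy, with a pure-translation homography standing in for one estimated from matched feature points (which `cv2.findHomography` would normally compute).

```python
import numpy as np

def apply_homography(H: np.ndarray, pts: np.ndarray) -> np.ndarray:
    """Map Nx2 points through a 3x3 homography using homogeneous coordinates."""
    homo = np.hstack([pts, np.ones((pts.shape[0], 1))]) @ H.T
    return homo[:, :2] / homo[:, 2:3]  # divide out the projective coordinate

# Pure translation by (3, -2): a simple model of hand-shake between two frames
H = np.array([[1.0, 0.0, 3.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0, 1.0]])
corners = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
aligned = apply_homography(H, corners)
```

Warping each non-reference frame by its estimated homography in this way brings all three frames into the reference frame's coordinate system before fusion.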
Acquiring a first motion region and a second motion region of an image sequence specifically comprises:
the RGB maps of the first non-reference image, the reference image and the second non-reference image are converted into grayscale maps, respectively. And performing HM on a brightness channel, performing HM on the gray scale map of the first non-reference image by taking the gray scale map of the reference image as a standard, turning up the gray scale map of the first non-reference image, performing histogram matching on the gray scale map of the second non-reference image, and turning down the gray scale map of the second non-reference image. Finally, comparing the gray value difference between the gray level image of the first non-reference image after the brightness adjustment and the gray level image of the reference image, determining all the pixels with the gray level difference larger than the preset threshold value as the first motion area, correspondingly, comparing the gray value difference between the gray level image of the second non-reference image after the brightness adjustment and the gray level image of the reference image, determining all the pixels with the gray level difference larger than the preset threshold value as the second motion area, for example, the gray value of the first pixel in the gray level image of the second non-reference image after the brightness adjustment is 200, the gray value of the first pixel in the gray level image of the reference image is 100, and the gray value difference between the two is 100, and when the preset threshold is 50, the difference between the gray values of the two is larger than the preset threshold, so that the pixel value of the first pixel point is 1, and after all the pixel points with the pixel values of 1 are obtained, the dried second motion area is obtained through threshold filtering and corrosion expansion operation.
Step 220, determining the target motion area according to the comparison result between the first proportion and the second threshold and the comparison result between the second proportion and the second threshold.
The first proportion is the proportion occupied by the first motion area in the first non-reference image, the second proportion is the proportion occupied by the second motion area in the second non-reference image, and the target motion area comprises at least one connected area.
Step 230, obtaining a first HDR image according to the first weight value and the second weight value.
The first weight value is one of: the sum of the per-pixel product of saturation, contrast and exposure degree in the first non-reference image and the same product in the second non-reference image; the per-pixel product of saturation, contrast and exposure degree in the first non-reference image; or the per-pixel product of saturation, contrast and exposure degree in the second non-reference image. The second weight value is the per-pixel product of saturation, contrast and exposure degree in the reference image.
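The saturation-contrast-exposure weighting is the scheme used in Mertens-style exposure fusion. The sketch below assumes the standard choices for each measure (channel standard deviation for saturation, gradient magnitude for contrast, a Gaussian well-exposedness centered at mid-gray); the patent does not define them, so these are assumptions.

```python
import numpy as np

def fusion_weights(img: np.ndarray) -> np.ndarray:
    """Per-pixel weight = saturation * contrast * exposure degree, for RGB in [0, 1]."""
    saturation = img.std(axis=2)                     # spread across the color channels
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)
    contrast = np.abs(gx) + np.abs(gy)               # simple gradient magnitude
    exposedness = np.exp(-((gray - 0.5) ** 2) / (2 * 0.2 ** 2))  # best near mid-gray
    return saturation * contrast * exposedness

def fuse(images):
    """Weighted per-pixel blend of the frames, weights normalized to sum to 1."""
    ws = np.stack([fusion_weights(im) + 1e-12 for im in images])  # epsilon avoids 0/0
    ws /= ws.sum(axis=0)
    return sum(w[..., None] * im for w, im in zip(ws, images))

mid = np.full((4, 4, 3), 0.5)   # flat mid-gray frame
dark = np.zeros((4, 4, 3))      # flat black frame
```

On flat frames all three measures vanish, so the epsilon makes the weights uniform and the fusion reduces to a plain average.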
The first proportion is the proportion of the first non-reference image occupied by the first motion region, the second proportion is the proportion of the second non-reference image occupied by the second motion region, and the target motion region comprises at least one connected region. In the field of image processing, a connected region (connected component) generally refers to an image area composed of foreground pixels that share the same pixel value and are adjacent to one another.
Further, determining the target motion region according to the comparison result between the first proportion and the second threshold and the comparison result between the second proportion and the second threshold comprises:
when neither the first proportion nor the second proportion is greater than the second threshold, superposing the first motion region and the second motion region and determining the superposed region as the target motion region; or,
when the first proportion is not greater than the second threshold and the second proportion is greater than the second threshold, determining the first motion region as the target motion region; or,
when the first proportion is greater than the second threshold and the second proportion is not greater than the second threshold, determining the second motion region as the target motion region.
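The three branches reduce to a small decision function. Interpreting "superposing" as the union of the two masks, and returning an empty mask when both proportions exceed the threshold, are assumptions, since the patent does not spell out either point.

```python
import numpy as np

def target_motion_region(mask1, mask2, ratio1, ratio2, second_threshold=0.5):
    """Pick the target motion region from the two candidate binary masks."""
    if ratio1 <= second_threshold and ratio2 <= second_threshold:
        return mask1 | mask2              # superpose both regions (assumed: union)
    if ratio1 <= second_threshold:        # only the second region is too large
        return mask1
    if ratio2 <= second_threshold:        # only the first region is too large
        return mask2
    return np.zeros_like(mask1)           # assumption: neither mask is trustworthy

m1 = np.array([[1, 0], [0, 0]], dtype=np.uint8)
m2 = np.array([[0, 1], [0, 0]], dtype=np.uint8)
```

A region that covers too much of its frame most likely reflects a global brightness mismatch rather than genuine motion, which is why it is discarded.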
Fig. 3 is a schematic structural diagram of a test question information collecting device according to an embodiment of the present invention. As shown in fig. 3, the test question information collecting apparatus 300 includes: an acquisition unit 310, a processing unit 320, a matching unit 330, a comparison unit 340 and a determination unit 350.
The acquiring unit 310 is configured to acquire a first image sequence of a first test question.
The processing unit 320 is configured to process the first sequence of images to generate a first high dynamic range HDR image.
The obtaining unit 310 is further configured to obtain an image feature of the first HDR image.
And the matching unit 330 is configured to match the image feature with a preset image feature model, and when the matching is successful, set the first image in a first question bank.
Further, the comparing unit 340 is configured to compare the matching result with a first threshold; when the matching result is greater than the first threshold, the matching succeeds; when it is not, the matching fails.
Further, the first image sequence comprises a reference image, a first non-reference image, and a second non-reference image; the exposure durations for the first test question increase in the order: first non-reference image, reference image, second non-reference image.
Further, the obtaining unit 310 is further configured to obtain a first motion region and a second motion region of the image sequence.
The first motion area is an area where the first non-reference image has a difference in gray value with respect to the reference image, and the second motion area is an area where the second non-reference image has a difference in gray value with respect to the reference image.
The determining unit 350 is configured to determine the target motion region according to a comparison result between the first ratio and the second threshold and a comparison result between the second ratio and the second threshold.
The first proportion is the proportion occupied by the first motion area in the first non-reference image, the second proportion is the proportion occupied by the second motion area in the second non-reference image, and the target motion area comprises at least one connected area.
The determining unit 350 is further configured to obtain a first HDR image according to the first weight value and the second weight value.
The first weight value is one of: the sum of the per-pixel product of saturation, contrast and exposure degree in the first non-reference image and the same product in the second non-reference image; the per-pixel product of saturation, contrast and exposure degree in the first non-reference image; or the per-pixel product of saturation, contrast and exposure degree in the second non-reference image. The second weight value is the per-pixel product of saturation, contrast and exposure degree in the reference image.
Those of skill will further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the components and steps above have been described generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above embodiments further explain the objects, technical solutions, and advantages of the present invention in detail. It should be understood that they are merely exemplary embodiments and are not intended to limit the scope of the present invention; any modifications, equivalents, improvements, and the like made within the spirit and principles of the present invention fall within its scope.

Claims (8)

1. A test question information acquisition method is characterized by comprising the following steps:
acquiring a first image sequence of a first test question;
processing the first image sequence to generate a first High Dynamic Range (HDR) image;
acquiring image features of the first HDR image;
and matching the image characteristics with a preset image characteristic model, and setting the first image in a first question bank when the matching is successful.
2. The test question information collection method according to claim 1, wherein before the first image is placed in the first question bank when the matching is successful, the method further comprises:
comparing the matching result with a first threshold value;
when the matching result is larger than a first threshold value, the matching is successful;
and when the matching result is not larger than the first threshold value, the matching is failed.
3. The test question information collection method according to claim 1, wherein the first image sequence comprises a reference image, a first non-reference image, and a second non-reference image; the exposure durations for the first test question increase in the order: first non-reference image, reference image, second non-reference image.
4. The test question information acquisition method of claim 3, wherein the processing the first image sequence to generate a first HDR image comprises:
acquiring a first motion region and a second motion region of an image sequence; the first motion area is an area where the first non-reference image has a gray value difference relative to the reference image, and the second motion area is an area where the second non-reference image has a gray value difference relative to the reference image;
determining a target motion area according to a comparison result between the first proportion and a second threshold value and a comparison result between the second proportion and the second threshold value; wherein the first proportion is a proportion occupied by the first motion region in the first non-reference image, the second proportion is a proportion occupied by the second motion region in the second non-reference image, and the target motion region includes at least one connected region;
obtaining the first HDR image according to a first weight value and a second weight value; wherein the first weight value is one of: the sum of the per-pixel product of saturation, contrast and exposure degree in the first non-reference image and the same product in the second non-reference image; the per-pixel product of saturation, contrast and exposure degree in the first non-reference image; or the per-pixel product of saturation, contrast and exposure degree in the second non-reference image; and the second weight value is the per-pixel product of saturation, contrast and exposure degree in the reference image.
5. An examination question information collecting apparatus, characterized in that the apparatus comprises:
an acquisition unit, configured to acquire a first image sequence of a first test question;
a processing unit, configured to process the first image sequence to generate a first high dynamic range (HDR) image;
the acquiring unit is further used for acquiring image characteristics of the first HDR image;
and the matching unit is used for matching the image characteristics with a preset image characteristic model, and when the matching is successful, the first image is arranged in a first question bank.
6. The test question information collecting apparatus according to claim 5, characterized in that the apparatus further comprises: a comparison unit;
the comparison unit is used for comparing the matching result with a first threshold value;
when the matching result is larger than a first threshold value, the matching is successful;
and when the matching result is not larger than the first threshold value, the matching is failed.
7. The test question information collection apparatus according to claim 5, wherein the first image sequence comprises a reference image, a first non-reference image, and a second non-reference image; the exposure durations for the first test question increase in the order: first non-reference image, reference image, second non-reference image.
8. The test question information acquisition apparatus according to claim 7, wherein the acquisition unit is further configured to acquire a first motion region and a second motion region of the image sequence; the first motion region being a region in which the first non-reference image differs in gray value from the reference image, and the second motion region being a region in which the second non-reference image differs in gray value from the reference image;
a determination unit, configured to determine a target motion region according to a comparison result between the first proportion and a second threshold and a comparison result between the second proportion and the second threshold; wherein the first proportion is the proportion of the first non-reference image occupied by the first motion region, the second proportion is the proportion of the second non-reference image occupied by the second motion region, and the target motion region comprises at least one connected region;
the determination unit being further configured to obtain the first HDR image according to the first weight value and the second weight value; the first weight value being the sum of the product of the saturation, contrast and exposure of each pixel in the first non-reference image and the product of the saturation, contrast and exposure of each pixel in the second non-reference image; or the first weight value being the product of the saturation, contrast and exposure of each pixel in the first non-reference image; or the first weight value being the product of the saturation, contrast and exposure of each pixel in the second non-reference image; and the second weight value being the product of the saturation, contrast and exposure of each pixel in the reference image.
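Claim 8's motion handling — gray-value difference masks against the reference image, area proportions compared with the second threshold, and a target region made of connected components — can be sketched as follows. The patent does not state the decision rule that follows the two comparisons; the sketch assumes a mask contributes to the target region only when its proportion exceeds the second threshold, and the difference threshold 10 and proportion threshold 0.05 are illustrative values, not the patent's.

```python
import numpy as np
from scipy import ndimage

def motion_mask(non_ref_gray, ref_gray, diff_thresh=10):
    """Pixels where the non-reference image differs in gray value
    from the reference image by more than diff_thresh."""
    diff = np.abs(non_ref_gray.astype(np.int32) - ref_gray.astype(np.int32))
    return diff > diff_thresh

def target_motion_region(mask1, mask2, second_threshold=0.05):
    """Keep each motion mask only when its area proportion exceeds the
    second threshold, then label the connected regions of the union."""
    combined = np.zeros(mask1.shape, dtype=bool)
    if mask1.mean() > second_threshold:   # first proportion vs second threshold
        combined |= mask1
    if mask2.mean() > second_threshold:   # second proportion vs second threshold
        combined |= mask2
    labels, num_regions = ndimage.label(combined)   # connected components
    return labels, num_regions
```

`ndimage.label` returns one integer label per connected region, matching the claim's requirement that the target motion region comprises at least one connected region.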
CN202111026973.9A 2021-09-02 2021-09-02 Test question information acquisition method and device Pending CN113723539A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111026973.9A CN113723539A (en) 2021-09-02 2021-09-02 Test question information acquisition method and device


Publications (1)

Publication Number Publication Date
CN113723539A true CN113723539A (en) 2021-11-30

Family

ID=78681007

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111026973.9A Pending CN113723539A (en) 2021-09-02 2021-09-02 Test question information acquisition method and device

Country Status (1)

Country Link
CN (1) CN113723539A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093273A (en) * 2012-12-30 2013-05-08 信帧电子技术(北京)有限公司 Video people counting method based on body weight recognition
CN103914567A (en) * 2014-04-23 2014-07-09 北京奇虎科技有限公司 Objective test question answer matching method and objective test question answer matching device
CN106372609A (en) * 2016-09-05 2017-02-01 广东欧珀移动通信有限公司 Fingerprint template update method, fingerprint template update device and terminal equipment
CN107292248A (en) * 2017-06-05 2017-10-24 广州诚予国际市场信息研究有限公司 A kind of merchandise control method and system based on image recognition technology
CN108668093A (en) * 2017-03-31 2018-10-16 华为技术有限公司 The generation method and device of HDR image
US20190318460A1 (en) * 2016-12-22 2019-10-17 Huawei Technologies Co., Ltd. Method and apparatus for generating high dynamic range image
CN111915635A (en) * 2020-08-21 2020-11-10 广州云蝶科技有限公司 Test question analysis information generation method and system supporting self-examination paper marking
CN112418006A (en) * 2020-11-05 2021-02-26 北京迈格威科技有限公司 Target identification method, device and electronic system
CN113220921A (en) * 2021-06-03 2021-08-06 南京红松信息技术有限公司 Question bank input automation method based on text and image search



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 1002, floor 10, block B, No. 18, Zhongguancun Street, Haidian District, Beijing 100044

Applicant after: Beijing Biyun shuchuang Technology Co.,Ltd.

Address before: Room 1002, floor 10, block B, No. 18, Zhongguancun Street, Haidian District, Beijing 100044

Applicant before: Beijing yundie Zhixue Technology Co.,Ltd.