WO2021218416A1 - Readable storage medium, display device and image processing method therefor - Google Patents
- Publication number: WO2021218416A1 (PCT/CN2021/079894)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords: area, initial image, image, target feature, target
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20132—Image cropping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Definitions
- the present disclosure relates to the field of display technology, and in particular, to a readable storage medium, a display device, and an image processing method of the display device.
- the purpose of the present disclosure is to provide a readable storage medium, a display device, and an image processing method of the display device.
- an image processing method of a display device including:
- the initial image is cropped according to the cropping ratio and the focus area to obtain a target image, which is displayed on the display device.
- determining the focus area according to the recognition result includes:
- if the target feature does not exist in the initial image, an area in the initial image whose definition reaches a specified threshold is used as the focus area;
- the focus area is determined according to the proportion of the target feature in the initial image.
- determining the focus area according to the proportion of the target feature in the initial image includes:
- the initial image is used as the focus area
- the area of the target feature is used as the focus area
- if the proportion of the target feature in the initial image is less than the minimum value of the threshold range, the area in the initial image whose definition reaches the specified threshold is taken as the focus area.
- using an area in the initial image whose sharpness reaches a specified threshold as the focus area includes:
- a line scanning action is performed through a scanning window, and the line scanning action includes: starting from the first starting area, scanning the initial image along the row direction to obtain the definition of a line scanning area and of each line acquisition area in the line scanning area; the boundary of the line scanning area in the row direction coincides with the boundary of the initial image in the row direction;
- the line acquisition area is an area corresponding to the scanning window in the line scanning area;
- if the definition of each of the line acquisition areas in the line scanning area does not reach the specified threshold, the first starting area is reselected from the areas whose definition has never been obtained, and the line scanning action is performed until the first reference area is determined; the distance in the column direction between the reselected first starting area and the first starting area before reselection is an integer multiple of the height of the scanning window in the column direction;
- the boundary of the column scanning area in the column direction coincides with the boundary of the initial image in the column direction;
- the column acquisition area is an area in the column scanning area corresponding to the scanning window;
- the focus area is determined according to the first reference area and the second reference area.
- the boundary of the focus region in the row direction is the outer boundary of the two outermost first reference regions in the row direction;
- the boundary of the focus area in the column direction is the outer boundary of the two outermost second reference areas in the column direction.
- the second starting area is the first reference area
- the center of the second starting region coincides with the midpoint of the center lines of the two outermost first reference regions in the row direction.
- cropping the initial image according to the cropping ratio and the focus area includes:
- the initial image is cropped along the edge of the target cropping area.
- the image processing method further includes:
- the initial image is cropped according to the cropping area parameter.
- the target feature includes a human face.
- a display device including an image processing device, the image processing device including:
- the image acquisition unit is used to acquire the initial image
- the parameter acquisition unit is used to acquire the cropping ratio and the target feature
- a recognition unit configured to recognize the target feature in the initial image, and determine the focus area according to the recognition result
- the processing unit is configured to crop the initial image according to the cropping ratio and the focus area to obtain and display a target image.
- the identification unit includes:
- An image recognition module for recognizing target features in the initial image
- the analysis module is configured to: when the target feature does not exist in the initial image, use an area in the initial image whose definition reaches a specified threshold as the focus area; and when the target feature exists in the initial image, determine the focus area according to the proportion of the target feature in the initial image.
- the analysis module includes:
- a comparison circuit for comparing the proportion of the target feature in the initial image with a threshold range
- the execution circuit is configured to: use the initial image as the focus area when the proportion of the target feature in the initial image is greater than the maximum value of the threshold range; use the area of the target feature as the focus area when the proportion of the target feature in the initial image is within the threshold range; and use the area in the initial image whose definition reaches the specified threshold as the focus area when the proportion of the target feature in the initial image is less than the minimum value of the threshold range.
- a readable storage medium on which a computer program is stored, the computer program, when executed, implementing the image processing method described in any one of the above.
- FIG. 1 is a flowchart of an embodiment of the image processing method of the present disclosure.
- FIG. 2 is a schematic diagram of a display device and a photographing terminal in an embodiment of the image processing method of the present disclosure.
- FIG. 3 is a flowchart of step S130 in an embodiment of the image processing method of the present disclosure.
- FIG. 4 is a flowchart of step S1310 in an embodiment of the image processing method of the present disclosure.
- FIG. 5 is a flowchart of step S1320 in an embodiment of the image processing method of the present disclosure.
- FIG. 6 is a flowchart of step S140 in an embodiment of the image processing method of the present disclosure.
- FIG. 7 is a schematic diagram of the working principle of an embodiment of the image processing method of the present disclosure.
- FIG. 8 is a schematic diagram of an embodiment of the display device of the present disclosure.
- FIG. 9 is a schematic diagram of an identification unit in an embodiment of the display device of the present disclosure.
- FIG. 10 is a schematic diagram of an analysis module in an embodiment of the display device of the present disclosure.
- FIG. 11 is a schematic diagram of an embodiment of the display system of the present disclosure.
- FIG. 12 is a schematic diagram of an embodiment of a readable storage medium of the present disclosure.
- the embodiments of the present disclosure provide an image processing method for a display device.
- the display device may be a screen, a TV, a computer, a mobile phone, etc., as long as the image can be acquired and displayed, and there is no special limitation here.
- the image processing method of the embodiment of the present disclosure includes step S110-step S140, wherein:
- Step S110 Obtain an initial image
- Step S120 Obtain a cropping ratio and a target feature
- Step S130 Identify the target feature in the initial image, and determine the focus area according to the recognition result
- Step S140 Clip the initial image according to the clipping ratio and the focus area to obtain a target image, and display it on the display device.
- the image processing method of the embodiment of the present disclosure can determine the focus area in the initial image based on the recognition result of the target feature, and then crop the initial image according to the focus area and the cropping ratio, so as to obtain a target image that better matches the display device and improves the display effect.
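The cropping of step S140 can be sketched as follows. This is an illustrative reading, not the patent's fixed algorithm: it fits the largest window of the requested aspect ratio inside the image, centers it on the focus area, and clamps it to the image bounds. All names and the exact geometry are assumptions.

```python
# Hypothetical sketch of step S140: place a crop window of the given aspect
# ratio around the focus area, clamped to the image. Boxes are
# (left, top, right, bottom) in pixels.

def crop_box(image_w, image_h, focus, ratio_w, ratio_h):
    fl, ft, fr, fb = focus
    cx, cy = (fl + fr) / 2.0, (ft + fb) / 2.0   # center of the focus area
    target = ratio_w / ratio_h
    # Largest box of the target ratio that fits inside the image.
    if image_w / image_h > target:
        h = image_h
        w = h * target
    else:
        w = image_w
        h = w / target
    # Center on the focus area, then clamp to the image bounds.
    left = min(max(cx - w / 2, 0), image_w - w)
    top = min(max(cy - h / 2, 0), image_h - h)
    return (round(left), round(top), round(left + w), round(top + h))
```

For example, cropping a 1920x1080 frame to 9:16 (portrait) around a face box near the right of the frame keeps the full image height and slides the window horizontally so the face stays inside.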
- step S110 an initial image is acquired.
- the initial image can come from a terminal device, which can be a camera terminal or other terminal, and the camera terminal can be a device that can take images such as a camera, a mobile phone, or a tablet computer.
- the terminal device is a camera terminal, which can communicate with the display device through wireless communication to transmit the image taken by the terminal device to the display device.
- the wireless communication method can be wifi, Bluetooth, etc.
- the specific type is not specifically limited here.
- the image can also be transmitted through a physical interface with a wired connection.
- the initial image is any image taken by the camera terminal, and its specific parameters and size are not specifically limited here.
- the initial image may be a group photo of a person, a landscape photo, etc.
- Obtaining the initial image refers to acquiring the image data of the initial image.
- the photographing terminal 121 may be a single-lens reflex camera, which has a wifi transmission module 122.
- the display device 111 may be a picture screen; the display device 111 may have a wifi signal processing module 112 that can match and communicate with the wifi transmission module 122, a system-on-a-chip (SOC, System-on-a-Chip) 113, an image processing device 114, and a display module 115.
- the photographing terminal 121 can package the initial image through the wifi transmission module 122, load it into a wifi signal in the 2.4GHz, 5GHz, or another frequency band, and send it to the wifi signal processing module 112 of the display device 111.
- the wifi signal processing module 112 transmits the image data to the system chip 113, so that the system chip 113 obtains the image data of the initial image and transmits it to the image processing device 114 to execute the image processing method of the present disclosure and perform image processing; the target image is then displayed through the display module 115.
- step S120 the cropping ratio and the target feature are acquired.
- Both the cropping ratio and the target feature can be pre-stored in a remote terminal by the user through an operation on a cropping terminal, and the remote terminal may be a remote server.
- the cropping ratio can be a user-defined ratio, or a ratio determined according to the display mode of the display device; the cropping ratio determined according to the display mode can be used as the default cropping ratio, and when the user has not set a custom cropping ratio, the default cropping ratio is used directly.
- the display mode depends on the type and placement of the display device.
- for example, the display device is a picture screen, and its display modes include landscape mode and portrait mode.
- for portrait mode, the cropping ratio can be 9:16; for landscape mode, the cropping ratio can be 16:9.
- the cropping ratio can be used as the cropping area ratio when cropping the initial image.
- the display mode can be preset, and at the same time, the mapping relationship between the display mode and the cropping ratio can be established, so that once the display mode is set, the corresponding cropping ratio can be determined.
- the placement mode of the display device can also be sensed by a device such as a position sensor, and based on a pre-established mapping of placement mode, display mode, and cropping ratio, the cropping ratio can be determined in real time.
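The mode-to-ratio mapping described above can be expressed as a simple lookup. The mode names and the 9:16 / 16:9 defaults follow the example in the text; the function shape and precedence of a user-defined ratio are assumptions for illustration.

```python
# Illustrative display-mode -> default cropping ratio mapping; a user-defined
# ratio, when present, overrides the default for the current mode.

DEFAULT_RATIOS = {"landscape": (16, 9), "portrait": (9, 16)}

def cropping_ratio(display_mode, user_ratio=None):
    """Return (ratio_w, ratio_h); the user-defined ratio wins when set."""
    if user_ratio is not None:
        return user_ratio
    return DEFAULT_RATIOS[display_mode]
```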
- the target feature can be used as a reference standard for determining the focus area of the initial image, so that the target feature can be retained to the greatest extent after cropping. It can be a human face, a specific type of animal, a specific type of plant, a building, etc., which is not specifically limited here.
- the user can make a selection on the cropping terminal and store the target feature information in the remote terminal for recall; for example, the user can select the human face as the target feature on the cropping terminal and upload it to the remote terminal.
- step S130 the target feature is recognized in the initial image, and the focus area is determined according to the recognition result.
- the target feature can be identified in the initial image.
- the target feature stored by the remote terminal is a human face
- the human face can be identified in the initial image.
- here, the human face is not the facial features of a specific user but only refers to the recognition mode: recognizing the facial area of any person counts as face recognition.
- the specific process of face recognition is not specifically limited here, as long as it can perform face detection on the initial image.
- the face may be a facial feature of a specific user, for example, the facial feature of the user "Zhang San". Only when the facial feature of "Zhang San" is recognized in the initial image can it be determined that there is a facial feature in the initial image.
- the specific technical details of recognizing the face of a specific user as the target feature are not specifically limited here, as long as the function can be realized; the user can set the specific content of the target feature by operating on the cropping terminal.
- the target feature may or may not exist in the initial image. Therefore, the recognition result does not necessarily include the target feature.
- the target feature can be used as the reference standard, while for the initial image without the target feature, the target feature cannot be used as the reference standard, and the area that needs to be retained can be determined by other methods.
- determining that the target feature does not exist in the initial image covers two cases: there is actually no image data of the target feature; or there is image data of the target feature, but the amount of data is too small, for example, less than a specified value, so that the target feature can be considered absent.
- determining the focus area according to the recognition result includes step S1310 and step S1320, where:
- Step S1310 If the target feature does not exist in the initial image, use an area in the initial image whose definition reaches a specified threshold as a focus area.
- Definition (sharpness) can be characterized by the variance function, the gray variance function, the gray variance product function, the Brenner gradient function, or other functions.
- the specific algorithm is not specifically limited here; the higher the definition, the clearer the initial image, and the lower the definition, the blurrier the initial image.
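Two of the measures named above can be sketched in a few lines. These pure-Python versions take a grayscale image as a list of rows of intensities; they illustrate the general functions, not a formula fixed by the patent.

```python
# Two common sharpness measures: gray variance and the Brenner gradient.
# Higher values indicate a sharper (higher-definition) region.

def gray_variance(img):
    """Variance of all pixel intensities in the region."""
    pixels = [p for row in img for p in row]
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)

def brenner(img):
    """Brenner gradient: sum of squared differences two pixels apart per row."""
    total = 0
    for row in img:
        for x in range(len(row) - 2):
            total += (row[x + 2] - row[x]) ** 2
    return total
```

A uniform (blurry) patch scores 0 on both measures, while a patch containing a hard edge scores high, which is why a threshold on these values can separate in-focus regions from out-of-focus ones.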
- a scanning window can be used to scan the initial image along the row direction and the column direction to obtain the definition of the region corresponding to the scanning window, and a region whose definition reaches the specified threshold is determined as a reference area; the reference area can be regarded as a more important area in the initial image, and cropping should be based on it. For example, the areas where the sharpness reaches the specified threshold can first be determined along the row direction; then, based on those areas, the areas where the sharpness reaches the specified threshold can be determined along the column direction. The areas whose sharpness reaches the specified threshold in the row and column directions can be used to construct the focus area.
- row direction and the column direction only refer to two different directions that cross each other. Those skilled in the art can know that as the position of the display device changes, the actual orientation of the row direction and the column direction may also change.
- in step S1310, the area in the initial image whose sharpness reaches the specified threshold is taken as the focus area; step S1310 may include step S13110-step S13170, in which:
- Step S13110 taking an area in the initial image where the definition has not been acquired as the first starting area, and performing a line scanning action through a scanning window.
- the line scanning action includes: starting from the first starting area, scanning the initial image along the row direction to obtain the definition of the row scanning area and of each row acquisition area in the row scanning area; the boundary of the row scanning area in the row direction coincides with the boundary of the initial image in the row direction; the row acquisition area is an area corresponding to the scanning window in the row scanning area.
- the scan window is used to obtain the definition of its corresponding area, and the shape and size of the first starting area are the same as those of the scan window, depending on the scan window.
- the first starting area may be located at the middle position of the initial image in the column direction and at the boundary position in the row direction, of course, it may also be other positions.
- the scanning window can be translated along the row direction by a specified length at a time, and the specified length can be equal to the width of the scanning window in the row direction.
- the scan window can also move in the row direction in other ways.
- the area scanned by the scan window in the line direction is a line scan area. Since the boundary of the line scan area in the line direction coincides with the boundary of the original image in the line direction, it is possible to avoid unscanned areas in the line direction. Since the line scanning area is formed by the movement of the scanning window, the line scanning area has a plurality of line acquisition areas corresponding to the scanning windows at various positions, and each line acquisition area is an area where the definition has been acquired. Each time the scanning window moves along the row direction, the row acquisition area is synchronously increased by one, and the definition of the row acquisition area is obtained.
- Step S13120 It is judged whether the definition of each line acquisition area in the line scanning area reaches a specified threshold.
- the definition can be compared with the specified threshold to determine whether the definition reaches the specified threshold, that is, whether there is a line acquisition area whose definition is not less than the specified threshold.
- Step S13130 If there is a line acquisition area whose definition reaches the specified threshold value in the line scanning area, a first reference area is determined according to the line acquisition area whose definition reaches the specified threshold value.
- the position of the corresponding line acquisition area is recorded, so as to determine each line acquisition area in the row scanning area whose definition reaches the specified threshold.
- the line acquisition area whose sharpness reaches the specified threshold can be used as the first reference area to limit the range of the focus area in the line direction.
- if the number of first reference areas is one, the boundary of the focus area in the row direction is the extension line, along the column direction, of the boundary in the row direction of that first reference area, that is, of the area whose definition reaches the specified threshold.
- if the number of first reference regions is multiple, the boundary of the focus region in the row direction is the extension line, along the column direction, of the outer boundaries of the two outermost first reference regions in the row direction.
- the two outermost first reference regions are the two first reference regions furthest apart in the row direction, and the outer boundaries of the two first reference regions in the row direction are the opposite boundaries in the row direction.
- the first reference area may be a plurality of continuous acquisition areas, of course, it may also be a plurality of discrete acquisition areas.
- Step S13140 If the definition of each of the line acquisition areas of the line scanning area does not reach the specified threshold, reselect the first starting area from the areas whose definition has never been obtained, and execute the line scanning action until the first reference area is determined; the distance in the column direction between the reselected first starting area and the first starting area before reselection is an integer multiple of the height of the scanning window in the column direction.
- the scanning window after the last row scanning operation can be moved in the column direction by a preset distance; the area corresponding to the scanning window after the move is taken as the new first starting area, and the scanning action is executed. Accordingly, step S13120 and step S13130 are executed again; once the first reference area is determined, scanning stops.
- the preset distance may be an integer multiple of the height of the scan window in the column direction, for example, 1 time, 2 times, and so on.
- the area after the scanning window moves along the column direction is an area where the definition has not been obtained, so as to avoid repeatedly obtaining the definition of the same area.
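Steps S13110-S13140 can be sketched as a single loop: slide a window along each row band, record the windows whose sharpness reaches the threshold, and drop down by the window height when a whole band fails. The non-overlapping step sizes, the variance-based sharpness default, and the return shape are illustrative assumptions.

```python
# Runnable sketch of the line scanning action (S13110-S13140).

def _variance(window):
    """Default sharpness measure: variance of the window's intensities."""
    pixels = [p for row in window for p in row]
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)

def line_scan(img, win_w, win_h, threshold, sharpness=_variance):
    """Return (row_offset, [x positions of first reference areas]) or None."""
    height, width = len(img), len(img[0])
    for top in range(0, height - win_h + 1, win_h):      # reselect starting area
        hits = []
        for left in range(0, width - win_w + 1, win_w):  # row scanning action
            window = [row[left:left + win_w] for row in img[top:top + win_h]]
            if sharpness(window) >= threshold:
                hits.append(left)                        # a first reference area
        if hits:
            return top, hits                             # stop once any band hits
    return None
```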
- Step S13150 Determine a second starting area according to the first reference area, and scan the initial image in the column direction through the scanning window to obtain the definition of the column scanning area and of each column acquisition area in the column scanning area; the boundary of the column scanning area in the column direction coincides with the boundary of the initial image in the column direction; the column acquisition area is an area corresponding to the scanning window in the column scanning area.
- the shape and size of the second starting area are defined by the scan window, that is, the shape and size of the two are the same.
- scan the initial image along the column direction to obtain the definition of the area of the scanning window corresponding to different positions.
- the scanning window can move along the column direction toward the two sides of the second starting area simultaneously, translating a specified length each time, and the specified length can be equal to the height of the scanning window in the column direction.
- the scan window can also move in the column direction in other ways.
- the column scanning area includes a plurality of column acquisition areas corresponding to different positions of the scanning window.
- the specific principles of the formation of the column scanning area and the column acquisition area, and of the acquisition of definition, can refer to the principle of the row scanning action in step S13140, and will not be detailed again here.
- a scan window of the same size and shape as in step S13140 may be used in step S13150, but not limited to this.
- when scanning along the column direction, it is also possible to use a scanning window different from the one used for scanning along the row direction.
- if the number of the first reference areas is one, the second starting area is the first reference area.
- if the number of the first reference areas is multiple, the second starting area is an area with the same size as the scanning window whose center coincides with the midpoint of the center line of the two outermost first reference areas in the row direction.
- the two outermost first reference areas are the two first reference areas farthest apart in the row direction, and the center line of the two outermost first reference areas is the line connecting the centers of the two first reference areas.
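The multi-reference construction of the second starting area above reduces to a midpoint computation. The function below is an illustrative reading: given the centers of the two outermost first reference areas, it returns the top-left corner of a window-sized area centered on the midpoint of the line connecting them.

```python
# Illustrative second-starting-area computation (steps above, multi-reference
# case): center a scan-window-sized area on the midpoint of the line
# connecting the two outermost first reference areas.

def second_start(outer_a, outer_b, win_w, win_h):
    """outer_a/outer_b are (cx, cy) centers; returns the (left, top) corner."""
    mx = (outer_a[0] + outer_b[0]) / 2.0
    my = (outer_a[1] + outer_b[1]) / 2.0
    return (mx - win_w / 2.0, my - win_h / 2.0)
```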
- Step S13160 Use a column acquisition area whose definition reaches a specified threshold in the column scanning area as a second reference area.
- the definition can be compared with the specified threshold to determine whether the definition reaches the specified threshold, that is, whether there is a column acquisition area whose definition is not less than the specified threshold.
- the column acquisition area whose sharpness reaches the specified threshold is used as the second reference area and recorded to limit the range of the focus area in the column direction.
- the boundary of the focus region in the column direction is the extension line, along the row direction, of the outer boundaries of the two outermost second reference regions.
- the two outermost second reference areas are the two second reference areas farthest apart in the column direction, and the outer boundaries of the two second reference areas in the column direction are their mutually opposite boundaries in the column direction.
- Step S13170 Determine the focus area according to the first reference area and the second reference area.
- the boundary of the focus area can be determined through the first reference area and the second reference area.
- if the number of first reference areas is multiple, the boundary of the focus area in the row direction is the extension line, along the column direction, of the outer boundaries of the two outermost first reference areas;
- if the number of second reference areas is multiple, the boundary of the focus area in the column direction is the extension line, along the row direction, of the outer boundaries of the two outermost second reference areas; the area enclosed by these extension lines may be the focus area.
- the focus area can also be determined based on the outer boundaries of the two outermost first reference areas and the outer boundaries of the two outermost second reference areas through a preset algorithm or image processing method.
- each column acquisition area in the column scanning area can also be sorted by definition, and one or more column acquisition areas with the highest definition, fewer than the total number of column acquisition areas, can be used as the second reference areas.
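Step S13170's bounding construction can be sketched as follows: the row-direction boundaries of the focus area come from the outermost first reference areas, the column-direction boundaries from the outermost second reference areas. Treating each reference area by its window corner position is an illustrative simplification.

```python
# Sketch of step S13170: build the focus area's bounding box from the
# outermost first reference areas (row scan) and second reference areas
# (column scan). Positions are the left/top corners of win_w x win_h windows.

def focus_from_references(first_refs_x, second_refs_y, win_w, win_h):
    """Return the focus area as (left, top, right, bottom)."""
    left = min(first_refs_x)
    right = max(first_refs_x) + win_w    # outer boundary of outermost window
    top = min(second_refs_y)
    bottom = max(second_refs_y) + win_h  # outer boundary of outermost window
    return (left, top, right, bottom)
```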
- Step S1320 If the target feature exists in the initial image, determine the focus area according to the proportion of the target feature in the initial image.
- the focus area needs to be determined based on the size of the area where the target feature is located. For example, if the area occupied by the target feature is large, the focus area can be determined based on the target feature area, without considering whether the sharpness of the target feature area reaches the specified threshold. If the area occupied by the target feature is too small, it is not the main element of the initial image, so the target feature can be ignored and the focus area determined according to the area whose sharpness reaches the specified threshold.
- the proportion of the target feature in the initial image is the total proportion of all target features in the initial image, that is, the sum of the proportions of each target feature in the initial image.
- step S1320 includes step S13210-step S13240, wherein:
- Step S13210 Compare the proportion of the target feature in the initial image with a threshold range.
- the proportion of the target feature in the initial image is the ratio of the size of the area of the target feature to the total size of the initial image.
- the proportion may be the ratio of the area of the target feature area to the total area of the initial image; or, the proportion may be the ratio of the number of pixels in the area of the target feature to the number of pixels in the initial image.
- the specific value of the threshold range is not specifically limited here.
- the proportion of the target feature in the initial image can be expressed as a percentage.
- the minimum value of the threshold range can be 5% and the maximum value is 25%, that is, the threshold range is 5%-25%.
- Step S13220 If the proportion of the target feature in the initial image is greater than the maximum value of the threshold range, the initial image is used as the focus area.
- if the proportion of the target feature in the initial image is greater than the maximum value of the threshold range, the target feature occupies most of the initial image. Therefore, the initial image can be directly used as the focus area, regardless of whether its definition reaches the specified threshold. Taking the target feature as a human face and the threshold range as 5%-25% as an example, if the proportion of the human face in the initial image is greater than 25%, the initial image is a close-up of a human face, and in this case the initial image can be directly used as the focus area.
- Step S13230 If the proportion of the target feature in the initial image is within the threshold range, the area of the target feature is used as the focus area.
- taking the target feature as a human face and the threshold range as 5%-25% as an example, if the proportion of the human face in the initial image is not less than 5% and not more than 25%, the initial image is an ordinary portrait. Therefore, the area of the target feature can be directly used as the focus area without further computation; the boundary of the focus area is the boundary of the area of the target feature.
- Step S13240 If the proportion of the target feature in the initial image is less than the minimum value of the threshold range, use an area in the initial image whose definition reaches a specified threshold as a focus area.
- taking the target feature as a human face and the threshold range as 5%-25% as an example, if the proportion of the face in the initial image is less than 5%, the subject of the initial image is not a person, or most of the face data is lost, and it is difficult to determine the focus area based on the human face. Therefore, the area whose definition reaches the specified threshold in the initial image can be used as the focus area; the way this area is determined has been described in detail above and is not repeated here.
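The three-way decision of steps S13220-S13240 can be sketched as follows. This is a hedged illustration: the function name, rectangle format, and return convention are chosen for the example rather than taken from the patent.

```python
def choose_focus_area(feature_boxes, image_size, lo=0.05, hi=0.25):
    """Decide the focus area from the total proportion of target features.

    feature_boxes: list of (x, y, w, h) rectangles of detected features.
    image_size: (width, height) of the initial image.
    lo/hi: the threshold range (5%-25% in the example above).
    Returns a tag indicating which region becomes the focus area.
    """
    img_w, img_h = image_size
    total = sum(w * h for _, _, w, h in feature_boxes)
    ratio = total / (img_w * img_h)
    if ratio > hi:                    # close-up: use the whole initial image
        return ("image", None)
    if ratio >= lo:                   # ordinary portrait: use the feature area
        return ("features", feature_boxes)
    return ("sharpness", None)        # too small: fall back to sharp regions
```

The `"sharpness"` branch corresponds to the scan-based procedure described elsewhere in the disclosure for finding areas whose definition reaches the specified threshold.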
- in step S140, the initial image is cropped according to the cropping ratio and the focus area to obtain a target image, which is displayed on the display device.
- the focus area serves as the reference for cropping, and the cropping ratio determines the size of the cropping range, that is, the size of the cropping area.
- step S140 may include steps S1410-S1430, wherein:
- Step S1410 Generate a plurality of clipping regions in the initial image according to the clipping ratio, and two opposite edges of each clipping region coincide with the edges of the initial image.
- the cropping area can be regarded as a window conforming to the cropping ratio. For example, for a rectangular display device in portrait mode with a cropping ratio of 16:9, the cropping area is a window with an aspect ratio of 16:9. All cropping areas have the same size.
- the two opposite edges of the cropping area can coincide with the edges of the initial image; either pair of edges may be the coincident pair, as long as the cropping ratio is satisfied. In this way, an image meeting the cropping ratio can be cropped while preserving the coincident edges, achieving the largest possible crop.
- since the initial image and the cropping area are both rectangular, two opposite edges of the cropping area may coincide with two opposite edges of the initial image, and the other two edges, together with the coincident edges, conform to the cropping ratio.
- coincidence in this embodiment only means that the edges are collinear; it does not require the edges to have the same length.
- step S1420 the cropping area with the smallest distance from the center of the focus area is taken as the target cropping area.
- the minimum value of the center distance may be zero, that is, the center of the cropping area may coincide with the center of the focus area.
- Step S1430 crop the initial image along the edge of the target crop area.
- when the target image is displayed on the display device, if the cropping ratio was determined according to the display mode of the display device, the size of the target image matches the display device; at the same time, because the target image is determined based on the focus area, the target features in the initial image are retained to the greatest extent, the loss caused by cropping is reduced, and the display effect of the display device is guaranteed.
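A minimal sketch of steps S1410-S1430, under the assumption that the image and crop window are axis-aligned rectangles and that integer pixel positions are acceptable; the function name and return format are illustrative, not from the patent:

```python
def crop_to_focus(img_w, img_h, ratio_w, ratio_h, focus_center, step=1):
    """Generate candidate crop windows with aspect ratio ratio_w:ratio_h whose
    two opposite edges coincide with the image edges, then return the window
    whose center is closest to the focus-area center, as (left, top, w, h)."""
    if img_w * ratio_h >= img_h * ratio_w:
        # Image is at least as wide as the ratio: the window spans the full
        # height and slides horizontally.
        crop_h, crop_w = img_h, img_h * ratio_w // ratio_h
        positions = [(x, 0) for x in range(0, img_w - crop_w + 1, step)]
    else:
        # Otherwise the window spans the full width and slides vertically.
        crop_w, crop_h = img_w, img_w * ratio_h // ratio_w
        positions = [(0, y) for y in range(0, img_h - crop_h + 1, step)]

    fx, fy = focus_center

    def center_dist(pos):
        # Squared distance between the window center and the focus center.
        x, y = pos
        return (x + crop_w / 2 - fx) ** 2 + (y + crop_h / 2 - fy) ** 2

    x, y = min(positions, key=center_dist)
    return (x, y, crop_w, crop_h)
```

For a 1920x1080 image cropped to 9:16 around its center, this yields a full-height 607x1080 window whose center lands on or next to the focus point.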
- the initial image can also be cropped through user-defined operations.
- the image processing method further includes steps S150-S170, wherein:
- Step S150 Acquire and display the initial image and the target image through a cropping terminal.
- the cropping terminal can be a mobile phone, a tablet computer, or other electronic equipment capable of human-computer interaction; it, the camera terminal, and the display device are three independent devices.
- the cropping terminal can communicate with both the display device and the camera terminal, and can acquire and display the initial image; if a target image exists, it can also acquire and display the target image.
- Step S160 Receive the cropping area parameter generated by the cropping terminal in response to the user's operation.
- the cropping area parameters include at least the location of the cropping area and the cropping ratio, where the cropping ratio is user-defined, such as 4:3, 16:9, or 7:5.
- in response to the user's operation, a custom cropping area parameter can be generated and sent to the display device.
- Step S170 Cropping the initial image according to the cropping area parameter.
- the cropping action can be performed directly according to the cropping area parameter: the initial image is cropped according to the location and cropping ratio contained in the cropping area parameter to obtain the target image, which is displayed on the display device.
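A hedged sketch of step S170, assuming the cropping area parameter arrives as a dict with an explicit position and size (the actual parameter format is not specified in the text):

```python
def crop_by_params(image, params):
    """Crop image (a 2-D list of pixel rows) using a user-supplied cropping
    area parameter containing the window position and size."""
    x, y, w, h = params["x"], params["y"], params["w"], params["h"]
    # Keep rows y..y+h, and within each row keep columns x..x+w.
    return [row[x:x + w] for row in image[y:y + h]]
```

In a real implementation the size would typically be derived from the user-defined cropping ratio before this call.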
- the user can directly perform the custom cropping operation, or first confirm the execution of the smart cropping operation, that is, perform steps S110-S140 to obtain the target image; if the target image does not meet the user's requirements, or the target image cannot be output, the above custom cropping operation is executed through the cropping terminal to regenerate the target image.
- Fig. 7 shows the workflow of an embodiment of the image processing method of the present disclosure, which includes steps S200-S290 and steps S300-S370; the specific principles have been described in detail above.
- the display device 100 may include an image processing device 101.
- the image processing device 101 includes an image acquisition unit 1, a parameter acquisition unit 2, a recognition unit 3, and a processing unit 4, wherein:
- the image acquisition unit 1 is used to acquire an initial image.
- the parameter acquisition unit 2 is used to acquire the cropping ratio and the target feature.
- the recognition unit 3 is used for recognizing the target feature in the initial image, and determining the focus area according to the recognition result.
- the processing unit 4 is configured to crop the initial image according to the cropping ratio and the focus area to obtain a target image, and display it on the display device.
- the display device 100 of the embodiment of the present disclosure can determine the focus area in the initial image based on the recognition result of the target feature, and then crop the initial image according to the focus area and the cropping ratio, so as to obtain a target image that better matches the display device and improve the display effect.
- the recognition unit 3 includes an image recognition module 31 and an analysis module 32, wherein:
- the image recognition module 31 is used to recognize target features in the initial image.
- the analysis module 32 is configured to, when the target feature does not exist in the initial image, use an area in the initial image whose definition reaches a specified threshold as the focus area; and when the target feature exists in the initial image, The focus area is determined according to the proportion of the target feature in the initial image.
- the analysis module 32 includes a comparison circuit 321 and an execution circuit 322, wherein:
- the comparison circuit 321 is configured to compare the proportion of the target feature in the initial image with a threshold range
- the execution circuit 322 is configured to use the initial image as the focus area when the proportion of the target feature in the initial image is greater than the maximum value of the threshold range; to use the area of the target feature as the focus area when the proportion is within the threshold range; and to use the area in the initial image whose definition reaches the specified threshold as the focus area when the proportion is less than the minimum value of the threshold range.
- the image processing device 101 in the embodiment of the present disclosure may be the image processing device 114 in FIG. 2.
- although several modules, units, or circuits of the device for action execution are mentioned in the above detailed description, this division is not mandatory.
- the features and functions of two or more modules, units or circuits described above may be embodied in one module, unit or circuit.
- the features and functions of a module, unit or circuit described above can be further divided into multiple modules, units or circuits to be embodied.
- the display system may include a display device 100 and a camera terminal 200, wherein:
- the display device 100 includes the image processing device 101 of any of the above embodiments; the specific details of the image processing method and the image processing device 101 have been described above and are not repeated here.
- the camera terminal 200 can be used to capture the initial image, and it can be a camera, a mobile phone, a tablet computer, etc., as long as it can capture and transmit images.
- the content and form of the initial image are not specifically limited here.
- the display system may further include a cropping terminal 300 and a remote terminal 400.
- the functions and specific details of the cropping terminal 300 and the remote terminal 400 have been described in the above embodiments of the image processing method and are not repeated here.
- the embodiments of the present disclosure also provide a readable storage medium on which a computer program is stored, and when the computer program is executed, the image processing method of any of the foregoing embodiments is implemented.
- the various aspects of the present disclosure can also be implemented in the form of a program product, which includes program code; when the program product runs on a terminal device, the program code causes the terminal device to execute the image processing method described in the above embodiments.
- the readable storage medium 500 can be any tangible medium that contains or stores a program, and the program can be used by or in combination with an instruction execution system, device, or device.
- the program product can use any combination of one or more readable media.
- the readable medium may be a readable signal medium or a readable storage medium.
- the readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- the computer-readable signal medium may include a data signal propagated in baseband or as a part of a carrier wave, and readable program code is carried therein. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
- the readable signal medium may also be any readable medium other than a readable storage medium, and the readable medium may send, propagate, or transmit a program for use by or in combination with the instruction execution system, apparatus, or device.
- the program code contained on the readable medium can be transmitted by any suitable medium, including but not limited to wireless, wired, optical cable, RF, etc., or any suitable combination of the foregoing.
- the program code used to perform the operations of the present invention can be written in any combination of one or more programming languages.
- these programming languages include object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar languages.
- the program code can be executed entirely on the user's computing device, partly on the user's device, as an independent software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
- the remote computing device can be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computing device (for example, through the Internet using an Internet service provider).
- the example embodiments described here can be implemented by software, or by combining software with the necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a U disk, a mobile hard disk, etc.) or on a network, and includes several instructions to make a computing device (which may be a personal computer, a server, a mobile terminal, or a network device, etc.) execute the method according to the embodiments of the present disclosure.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Studio Devices (AREA)
- Image Processing (AREA)
Abstract
Description
Claims (13)
- An image processing method for a display device, comprising: acquiring an initial image; acquiring a cropping ratio and a target feature; recognizing the target feature in the initial image, and determining a focus area according to the recognition result; and cropping the initial image according to the cropping ratio and the focus area to obtain a target image, and displaying it on the display device.
- The image processing method according to claim 1, wherein determining the focus area according to the recognition result comprises: if the target feature does not exist in the initial image, using an area in the initial image whose definition reaches a specified threshold as the focus area; and if the target feature exists in the initial image, determining the focus area according to the proportion of the target feature in the initial image.
- The image processing method according to claim 2, wherein determining the focus area according to the proportion of the target feature in the initial image comprises: comparing the proportion of the target feature in the initial image with a threshold range; if the proportion is greater than the maximum value of the threshold range, using the initial image as the focus area; if the proportion is within the threshold range, using the area of the target feature as the focus area; and if the proportion is less than the minimum value of the threshold range, using an area in the initial image whose definition reaches a specified threshold as the focus area.
- The image processing method according to claim 2 or 3, wherein using an area in the initial image whose definition reaches a specified threshold as the focus area comprises: taking an area in the initial image whose definition has not been acquired as a first starting area, and performing a line scanning action through a scanning window, the line scanning action comprising: starting from the first starting area, scanning the initial image along the row direction to obtain a row scanning area and the definition of each row acquisition area in the row scanning area, wherein the boundary of the row scanning area in the row direction coincides with the boundary of the initial image in the row direction, and the row acquisition area is an area in the row scanning area corresponding to the scanning window; judging whether the definition of each row acquisition area in the row scanning area reaches the specified threshold; if there is a row acquisition area in the row scanning area whose definition reaches the specified threshold, determining a first reference area according to the row acquisition area whose definition reaches the specified threshold; if the definition of each row acquisition area in the row scanning area does not reach the specified threshold, reselecting the first starting area from the areas whose definition has not been acquired, and performing the line scanning action until the first reference area is determined, wherein the distance in the column direction between the reselected first starting area and the first starting area before reselection is an integer multiple of the height of the scanning window in the column direction; determining a second starting area according to the first reference area, and scanning the initial image along the column direction through the scanning window to obtain a column scanning area and the definition of each column acquisition area in the column scanning area, wherein the boundary of the column scanning area in the column direction coincides with the boundary of the initial image in the column direction, and the column acquisition area is an area in the column scanning area corresponding to the scanning window; taking a column acquisition area in the column scanning area whose definition reaches the specified threshold as a second reference area; and determining the focus area according to the first reference area and the second reference area.
- The image processing method according to claim 4, wherein if there are multiple first reference areas, the boundary of the focus area in the row direction is the outer boundary of the two outermost first reference areas in the row direction; and if there are multiple second reference areas, the boundary of the focus area in the column direction is the outer boundary of the two outermost second reference areas in the column direction.
- The image processing method according to claim 4, wherein if the number of first reference areas is one, the second starting area is the first reference area; and if the number of first reference areas is multiple, the center of the second starting area coincides with the midpoint of the center lines of the two outermost first reference areas in the row direction.
- The image processing method according to claim 3, wherein cropping the initial image according to the cropping ratio and the focus area comprises: generating a plurality of cropping areas in the initial image according to the cropping ratio, wherein two opposite edges of each cropping area coincide with the edges of the initial image; taking the cropping area with the smallest distance from the center of the focus area as a target cropping area; and cropping the initial image along the edges of the target cropping area.
- The image processing method according to claim 1, further comprising: acquiring and displaying the initial image and the target image through a cropping terminal; receiving a cropping area parameter generated by the cropping terminal in response to a user's operation; and cropping the initial image according to the cropping area parameter.
- The image processing method according to claim 1, wherein the target feature comprises a human face.
- A display device, comprising an image processing device, the image processing device comprising: an image acquisition unit, configured to acquire an initial image; a parameter acquisition unit, configured to acquire a cropping ratio and a target feature; a recognition unit, configured to recognize the target feature in the initial image and determine a focus area according to the recognition result; and a processing unit, configured to crop the initial image according to the cropping ratio and the focus area to obtain and display a target image.
- The display device according to claim 10, wherein the recognition unit comprises: an image recognition module, configured to recognize the target feature in the initial image; and an analysis module, configured to, when the target feature does not exist in the initial image, use an area in the initial image whose definition reaches a specified threshold as the focus area, and when the target feature exists in the initial image, determine the focus area according to the proportion of the target feature in the initial image.
- The display device according to claim 11, wherein the analysis module comprises: a comparison circuit, configured to compare the proportion of the target feature in the initial image with a threshold range; and an execution circuit, configured to use the initial image as the focus area when the proportion of the target feature in the initial image is greater than the maximum value of the threshold range, use the area of the target feature as the focus area when the proportion is within the threshold range, and use an area in the initial image whose definition reaches a specified threshold as the focus area when the proportion is less than the minimum value of the threshold range.
- A readable storage medium on which a computer program is stored, wherein the computer program, when executed, implements the image processing method according to any one of claims 1-9.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010354667.7 | 2020-04-29 | ||
CN202010354667.7A CN111583273A (en) | 2020-04-29 | 2020-04-29 | Readable storage medium, display device and image processing method thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021218416A1 true WO2021218416A1 (en) | 2021-11-04 |
Family
ID=72116950
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/079894 WO2021218416A1 (en) | 2020-04-29 | 2021-03-10 | Readable storage medium, display device and image processing method therefor |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111583273A (en) |
WO (1) | WO2021218416A1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111583273A (en) * | 2020-04-29 | 2020-08-25 | 京东方科技集团股份有限公司 | Readable storage medium, display device and image processing method thereof |
CN113592874A (en) * | 2020-04-30 | 2021-11-02 | 杭州海康威视数字技术股份有限公司 | Image display method and device and computer equipment |
CN112087579B (en) * | 2020-09-17 | 2022-08-12 | 维沃移动通信有限公司 | Video shooting method and device and electronic equipment |
CN112532785B (en) * | 2020-11-23 | 2022-02-01 | 上海米哈游天命科技有限公司 | Image display method, image display device, electronic apparatus, and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101685620A (en) * | 2008-09-26 | 2010-03-31 | 鸿富锦精密工业(深圳)有限公司 | Display device capable of adjusting image and image adjusting method |
US20140176612A1 (en) * | 2012-12-26 | 2014-06-26 | Canon Kabushiki Kaisha | Image processing apparatus, image capturing apparatus, image processing method, and storage medium |
CN108122238A (en) * | 2018-01-30 | 2018-06-05 | 百度在线网络技术(北京)有限公司 | Image processing method, device, equipment and computer readable storage medium |
CN109523503A (en) * | 2018-09-11 | 2019-03-26 | 北京三快在线科技有限公司 | A kind of method and apparatus of image cropping |
CN110298380A (en) * | 2019-05-22 | 2019-10-01 | 北京达佳互联信息技术有限公司 | Image processing method, device and electronic equipment |
CN110378312A (en) * | 2019-07-26 | 2019-10-25 | 上海商汤智能科技有限公司 | Image processing method and device, electronic equipment and storage medium |
CN111583273A (en) * | 2020-04-29 | 2020-08-25 | 京东方科技集团股份有限公司 | Readable storage medium, display device and image processing method thereof |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106454105A (en) * | 2016-10-28 | 2017-02-22 | 努比亚技术有限公司 | Device and method for image processing |
CN108776970B (en) * | 2018-06-12 | 2021-01-12 | 北京字节跳动网络技术有限公司 | Image processing method and device |
-
2020
- 2020-04-29 CN CN202010354667.7A patent/CN111583273A/en active Pending
-
2021
- 2021-03-10 WO PCT/CN2021/079894 patent/WO2021218416A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN111583273A (en) | 2020-08-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021218416A1 (en) | Readable storage medium, display device and image processing method therefor | |
WO2020177583A1 (en) | Image cropping method and electronic device | |
CN108933915B (en) | Video conference device and video conference management method | |
CN105874776B (en) | Image processing apparatus and method | |
WO2017071062A1 (en) | Area extracting method and apparatus | |
WO2017031901A1 (en) | Human-face recognition method and apparatus, and terminal | |
US11481975B2 (en) | Image processing method and apparatus, electronic device, and computer-readable storage medium | |
WO2021031609A1 (en) | Living body detection method and device, electronic apparatus and storage medium | |
US10003785B2 (en) | Method and apparatus for generating images | |
CN108024065B (en) | Terminal shooting method, terminal and computer readable storage medium | |
US20220166930A1 (en) | Method and device for focusing on target subject, and electronic device | |
EP3687157A1 (en) | Method for capturing images and electronic device | |
KR101620933B1 (en) | Method and apparatus for providing a mechanism for gesture recognition | |
KR102559859B1 (en) | Object segmentation in a sequence of color image frames based on adaptive foreground mask upsampling | |
CN107566742B (en) | Shooting method, shooting device, storage medium and electronic equipment | |
WO2021208875A1 (en) | Visual detection method and visual detection apparatus | |
WO2021212810A1 (en) | Image processing method and apparatus, electronic device, and storage medium | |
CN107622497B (en) | Image cropping method and device, computer readable storage medium and computer equipment | |
US10621730B2 (en) | Missing feet recovery of a human object from an image sequence based on ground plane detection | |
CN105654039A (en) | Image processing method and device | |
CN112017137A (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium | |
TW201502998A (en) | Image recognizing method, apparatus, terminal apparatus and server | |
CN113711123A (en) | Focusing method and device and electronic equipment | |
JP2014027355A (en) | Object retrieval device, method, and program which use plural images | |
US20150009314A1 (en) | Electronic device and eye region detection method in electronic device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21796678 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21796678 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 29.06.2023) |
|