CN108235816A - Image recognition method, system, electronic device and computer program product

Image recognition method, system, electronic device and computer program product

Info

Publication number
CN108235816A
Authority
CN
China
Prior art keywords
image
scene
identification
focal length
recognition
Prior art date
Legal status
Granted
Application number
CN201880000060.XA
Other languages
Chinese (zh)
Other versions
CN108235816B (en)
Inventor
刘兆祥
廉士国
王敏
Current Assignee
Cloudminds Shanghai Robotics Co Ltd
Original Assignee
Cloudminds Shenzhen Robotics Systems Co Ltd
Priority date
Filing date
Publication date
Application filed by Cloudminds Shenzhen Robotics Systems Co Ltd
Publication of CN108235816A
Application granted granted Critical
Publication of CN108235816B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

An image recognition method, system, electronic device and computer program product in the field of image recognition technology. The method collects a scene image of the scene in which the identification object is located at a middle-focus focal length, determines an identification focal length from that scene image, and then identifies the object at the identification focal length. Because the identification focal length is determined dynamically from the scene in which the object is located, the environment is identified automatically, without any user intervention or input, and a focal length suited to that environment is selected to achieve the best shooting effect. This improves identification accuracy and greatly improves the convenience of daily life for the blind.

Description

Image recognition method, system, electronic device and computer program product
Technical Field
The present application relates to the field of image recognition technologies, and in particular, to an image recognition method, an image recognition system, an electronic device, and a computer program product.
Background
China has the largest blind population in the world. As a special group within society, blind people live their whole lives in darkness and consequently face many difficulties in daily life.
Camera-based intelligent image recognition can make daily life more convenient for the blind, and the quality of the captured image is critical to the subsequent recognition functions. A fixed-focus camera can only capture sharp images within a certain depth-of-field range, which limits its range of application; an auto-focus camera, without user intervention, frequently ends up out of focus, so the captured image cannot be used for subsequent fine-grained image recognition.
Disclosure of Invention
The embodiments of the present application provide an image recognition method, an image recognition system, an electronic device and a computer program product.
In a first aspect, an embodiment of the present application provides an image recognition method, where the method includes:
acquiring, at a middle-focus focal length, a scene image of the scene in which an identification object is located;
determining an identification focal length according to the scene image; and
identifying the identification object by using the identification focal length.
In a second aspect, an embodiment of the present application provides an electronic device, including:
a memory and one or more processors, the memory being connected to the processor through a communication bus; the processor is configured to execute instructions stored in the memory, and the memory stores instructions for carrying out the steps of the method of the first aspect.
In a third aspect, the present application provides a computer program product for use in conjunction with an electronic device including a display, the computer program product including a computer-readable storage medium and a computer program mechanism embedded therein, the computer program mechanism including instructions for performing the steps of the method of the first aspect.
In a fourth aspect, an embodiment of the present application provides an image recognition system, including: an image acquisition unit and a mobile computing processing unit;
the image acquisition unit is a camera with a controllable focal length, wherein the controllable focal length range comprises a long focal length, a middle focal length and a short focal length; or,
the image acquisition unit comprises three or more fixed-focus cameras, including at least one long-focus camera, at least one middle-focus camera and at least one short-focus camera;
the mobile computing processing unit is the electronic device of the second aspect;
the mobile computing processing unit is connected with the image acquisition unit through a Universal Serial Bus (USB) or a wireless communication mode.
The beneficial effects are as follows:
In the embodiments of the present application, the scene image in which the identification object is located is collected at the middle-focus focal length, the identification focal length is determined according to the scene image, and the identification object is identified at that focal length. The environment is therefore identified automatically, without any user intervention or input, and a suitable focal length is selected for that environment to achieve the best shooting effect, which improves identification accuracy and greatly improves the convenience of daily life for the blind.
Drawings
Specific embodiments of the present application will be described below with reference to the accompanying drawings, in which:
fig. 1 is a schematic structural diagram of an electronic device in an embodiment of the present application;
FIG. 2 is a schematic flow chart of an image recognition method in an embodiment of the present application;
fig. 3 is a schematic diagram of an image recognition method in an embodiment of the present application.
Detailed Description
To make the technical solutions and advantages of the present application clearer, exemplary embodiments of the present application are described in further detail below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present application, and the embodiments and the features of the embodiments in the present application may be combined with each other where no conflict arises.
Camera-based intelligent image recognition can make daily life more convenient for the blind, and the quality of the captured image is critical to the subsequent recognition functions. A fixed-focus camera can only capture sharp images within a certain depth-of-field range, which limits its range of application; an auto-focus camera, without user intervention, frequently ends up out of focus, so the captured image cannot be used for subsequent fine-grained image recognition.
To improve image capture quality and make life more convenient for the blind, the embodiments of the present application provide an image recognition method: a scene image in which the identification object is located is collected at a middle-focus focal length, an identification focal length is determined according to that scene image, and the identification object is identified at the identification focal length. The environment is thus identified automatically, without user intervention or input, and a suitable focal length is selected for that environment to achieve the best shooting effect, which improves identification accuracy and greatly improves the convenience of daily life for the blind.
The image recognition method provided by the application can be used in the following image recognition system. The image recognition system includes: an image acquisition unit and a mobile computing processing unit.
1. Image acquisition unit
The image acquisition unit is used to acquire an image at the current focal length. In the present application, the focal length of the image acquisition unit can be controlled and modified by the mobile computing processing unit, and its adjustable range is large: it can clearly capture anything from text a few centimeters away to a traffic light or traffic sign dozens of meters away. The image acquisition unit may therefore be implemented in various forms.
For example, the image acquisition unit may be a camera with a controllable focal length, where the controllable range covers a long-focus focal length, a middle-focus focal length and a short-focus focal length. In this case the number of cameras is not limited, but at least one camera must have a controllable focal length.
As another example, the image acquisition unit may consist of three or more fixed-focus cameras, including at least one long-focus camera, at least one middle-focus camera and at least one short-focus camera.
In practical applications, the image acquisition unit may be located on wearable glasses, such as on blind-guide glasses.
2. Mobile computing processing unit
The mobile computing processing unit may be connected to the image acquisition unit through USB (Universal Serial Bus) or a wireless communication link (such as Bluetooth).
The mobile computing processing unit is responsible for controlling the focal length of the image acquisition unit, image acquisition, coarse scene classification, the specific image recognition functions, and voice broadcast output.
For example, the mobile computing processing unit may set the focal length of the image acquisition unit to the middle-focus focal length and acquire the scene image in which the identification object is located through the image acquisition unit at that focal length.
As another example, the mobile computing processing unit may set the focal length of the image acquisition unit to the identification focal length and acquire the first image of the identification object through the image acquisition unit at that focal length.
As another example, after determining that recognition is complete, the mobile computing processing unit may set the focal length of the image acquisition unit back to the middle-focus focal length.
In a specific application, the mobile computing processing unit may be an electronic device as shown in Fig. 1, for example a general-purpose smartphone. The electronic device includes a memory 101 and one or more processors 102; the memory, the processor and the transceiver component 103 are connected through a communication bus (described in this embodiment as an I/O bus). The memory stores instructions for executing all steps of the image recognition method shown in Fig. 2: the scene image in which the identification object is located is acquired at a middle-focus focal length, the identification focal length is determined according to the scene image, and the identification object is identified at the identification focal length. In this way the environment is identified automatically, without user intervention or input, a suitable focal length is selected for that environment to achieve the best shooting effect, identification accuracy is improved, and the convenience of daily life for the blind is greatly improved.
It will be appreciated that in practice, the above-described transceiver component 103 need not necessarily be included for the purpose of achieving the basic objectives of the present application.
Referring to fig. 2, the image recognition method provided in this embodiment includes:
and 201, acquiring a scene image where the identification object is located by adopting a middle focal length.
201-1: The mobile computing processing unit establishes a connection with the image acquisition unit through USB or a wireless communication mode such as Bluetooth.
201-2: Through this connection, the mobile computing processing unit collects, at the middle-focus focal length, the scene image in which the identification object is located.
The method specifically comprises the following steps:
1) If the image acquisition unit is a camera with a controllable focal length:
(1) Through the connection, the mobile computing processing unit adjusts the focal length of the image acquisition unit to the middle-focus position, so that the current focal length of the image acquisition unit is the middle-focus focal length.
(2) The image acquisition unit acquires the scene image in which the identification object is located at the current focal length and transmits it to the mobile computing processing unit; the mobile computing processing unit thus acquires the scene image through the image acquisition unit at the middle-focus focal length.
2) If the image acquisition unit consists of three or more fixed-focus cameras, including at least one long-focus camera, at least one middle-focus camera and at least one short-focus camera:
(1) Through the connection, the mobile computing processing unit selects a middle-focus camera in the image acquisition unit.
(2) The selected middle-focus camera collects the scene image in which the identification object is located and transmits it to the mobile computing processing unit; the mobile computing processing unit thus collects the scene image through the image acquisition unit at the middle-focus focal length. A minimal code sketch covering both cases is given below.
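The control flow of step 201-2 can be illustrated with a short sketch. The camera objects assumed here (a zoom camera with set_focal_length/capture methods, or a cameras mapping keyed by focal length) are hypothetical wrappers standing in for the real USB/Bluetooth camera driver, which this embodiment does not specify.

```python
# Hedged sketch of step 201-2 for both hardware variants; the camera objects and
# their methods are hypothetical stand-ins for the real USB/Bluetooth driver.
from enum import Enum

class Focus(Enum):
    SHORT = 0
    MIDDLE = 1
    LONG = 2

def capture_scene_image(acquisition_unit, focus=Focus.MIDDLE):
    """Collect an image at the requested focal length.

    Variant 1: a single camera whose focal length is controllable.
    Variant 2: three or more fixed-focus cameras, one of which is selected.
    """
    if hasattr(acquisition_unit, "set_focal_length"):   # variant 1: zoom camera
        acquisition_unit.set_focal_length(focus)
        return acquisition_unit.capture()
    # variant 2: pick the fixed-focus camera matching the requested focal length
    return acquisition_unit.cameras[focus].capture()
```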
202: Determine the identification focal length according to the scene image.
After the scene image is acquired in step 201, it is coarsely classified to obtain the coarse scene to which it belongs. The identification focal length used for the actual identification is then determined according to that coarse scene, the lens is adjusted to the corresponding identification focal length, an image is acquired, and the corresponding image recognition function is called.
The specific implementation process of step 202 is as follows:
202-1: Roughly classify the scene image through a scene rough classification model to determine the coarse scene to which the scene image belongs.
The coarse scene is a long-focus scene, a middle-focus scene or a short-focus scene, and each coarse scene corresponds to one or more image recognition functions. Different coarse scenes correspond to different image recognition functions, and the number of corresponding functions may or may not be the same.
For example, when the coarse scene is a long-focus scene, it may correspond to only one image recognition function, namely the traffic light recognition function.
When the coarse scene is a short-focus scene, it may correspond to two image recognition functions, namely the reading recognition function and the article recognition function.
When the coarse scene is a middle-focus scene, it may correspond to multiple image recognition functions, one of which is the face recognition function.
Beyond the long-focus, middle-focus and short-focus scenes, further coarse scenes may be added as required, and the number and the specific types of the image recognition functions corresponding to each coarse scene may likewise be adjusted as required. This embodiment does not limit the specific categories of coarse scenes, the number or the specific types of image recognition functions corresponding to each coarse scene, or when and how these are adjusted.
The scene rough classification model is obtained by performing deep learning on a sample of a long-focus scene, a sample of a middle-focus scene, and a sample of a short-focus scene.
Specifically:
1) Collect image samples of the three scene types using the middle-focus lens. For example, the long-focus scene is typically a traffic light recognition scene, so images of such scenes are collected as samples of that category; the middle-focus scene is typically a face recognition scene, so scene images with a pedestrian nearby and in front can be collected as samples of that category; the short-focus scene is typically an OCR (Optical Character Recognition) scene, so images of, for example, reading material can be collected as samples of that category.
2) Train a CNN (Convolutional Neural Network) on these samples, for example a ResNet network.
After training is finished, a trained scene rough classification model is obtained. In step 202-1, the trained model and its weights are used to classify the scene image, and the coarse scene it belongs to is determined from the output probabilities; in this way, the scene image acquired in step 201 is coarsely classified by a deep-learning CNN.
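Purely as an illustration of steps 1) and 2) above, the sketch below fine-tunes a small torchvision ResNet on the three coarse-scene classes. The dataset path, class ordering, hyperparameters and the choice of ResNet-18 are placeholder assumptions (the embodiment only requires some CNN such as a ResNet), and the exact torchvision weights API can differ between versions.

```python
# Hedged sketch of the scene rough classification model: a ResNet-18 with a
# three-way output head, trained on images collected with the middle-focus lens.
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets

NUM_COARSE_SCENES = 3  # long-focus / middle-focus / short-focus scene

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, NUM_COARSE_SCENES)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# training (abridged): folder names are assumed to be the three scene categories
train_set = datasets.ImageFolder("coarse_scene_samples/", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
model.eval()

def classify_scene(image_tensor):
    """Step 202-1: return the index of the coarse scene with the highest probability."""
    with torch.no_grad():
        probs = torch.softmax(model(image_tensor.unsqueeze(0)), dim=1)
    return int(probs.argmax(dim=1))
```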
202-2: Determine the focal length corresponding to the coarse scene as the identification focal length.
Once these steps have been executed, the identification focal length to be used when actually identifying the identification object has been obtained. The focal length is thus changed dynamically for different coarse scenes, which improves the quality of the images captured for subsequent identification, improves identification accuracy, and greatly improves the convenience of daily life for the blind.
203: Identify the identification object using the identification focal length.
203-1: Acquire a first image of the identification object using the identification focal length.
The mobile computing processing unit sets the focal length of the image acquisition unit to the identification focal length, and the image acquisition unit, now at the identification focal length, acquires a first image of the identification object.
The method specifically comprises the following steps:
1) If the image acquisition unit is a camera with a controllable focal length:
(1) Through the connection established with the image acquisition unit, the mobile computing processing unit adjusts the focal length of the image acquisition unit to the identification focal length, so that the current focal length of the image acquisition unit is the identification focal length.
(2) The image acquisition unit acquires an image of the identification object at the current focal length and transmits it to the mobile computing processing unit; the mobile computing processing unit thus acquires the first image of the identification object through the image acquisition unit at the identification focal length.
2) If the image acquisition unit consists of three or more fixed-focus cameras, including at least one long-focus camera, at least one middle-focus camera and at least one short-focus camera:
(1) Through the connection established with the image acquisition unit, the mobile computing processing unit selects the camera in the image acquisition unit that corresponds to the identification focal length.
(2) The selected camera captures an image of the identification object and transmits it to the mobile computing processing unit; the mobile computing processing unit thus collects the first image of the identification object through the image acquisition unit at the identification focal length.
203-2: If the coarse scene corresponds to a single image recognition function, call that image recognition function to recognize the first image.
If the coarse scene has only one image recognition function, for example a long-focus scene whose only corresponding function is the traffic light recognition function, that function can be called directly.
That is, the traffic light recognition function is called and the identification object is determined, according to the first image, to be a red light, a green light or a yellow light; or the traffic light recognition function is called and the identification object is determined, according to the first image, to be a red light, a green light, a yellow light, or a non-red/green/yellow light (that is, none of the three).
For example, within the traffic light recognition function the first image may be classified by a deep neural network to discriminate between red, green and yellow lights, or between red, green, yellow and non-red/green/yellow lights.
The specific implementation process of this scheme may refer to the implementation process of roughly classifying the scene image in step 202-1.
Alternatively, within the traffic light recognition function, target detection may be used to detect three targets in the first image (red light, green light and yellow light), or four targets (red light, green light, yellow light, and non-red/green/yellow light).
The target detection method includes, but is not limited to, detection based on an SSD (Single Shot MultiBox Detector) target detection model.
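As a hedged illustration only, the sketch below uses the off-the-shelf SSD implementation from torchvision as a stand-in for the SSD model mentioned above. The class indices, the score threshold, and the choice to treat "no confident detection" as the non-red/green/yellow outcome are assumptions, and in practice the model would first have to be trained on labelled traffic-light images.

```python
# Hedged sketch of the target-detection variant of traffic light recognition.
import torch
from torchvision.models.detection import ssd300_vgg16

# background + red / green / yellow; "non-red/green/yellow light" is handled as
# "no detection above the threshold" rather than as an explicit class
model = ssd300_vgg16(weights=None, num_classes=4)
model.eval()

LABELS = {1: "red light", 2: "green light", 3: "yellow light"}

def detect_traffic_light(image_tensor, score_threshold=0.5):
    """Return the most confident traffic-light label, or None if nothing is found."""
    with torch.no_grad():
        output = model([image_tensor])[0]   # dict with "boxes", "labels", "scores"
    for label, score in zip(output["labels"], output["scores"]):
        if float(score) >= score_threshold:
            return LABELS.get(int(label), "unknown")
    return None  # interpreted as the non-red/green/yellow case
```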
203-3: If the coarse scene corresponds to multiple image recognition functions, finely classify the first image through a scene fine classification model to determine which image recognition function within the coarse scene corresponds to the first image, and then call that image recognition function to recognize the first image.
In other words, if the coarse scene corresponds to several subdivided image recognition functions, the scene is first finely classified and then the specific image recognition function is called.
For example, if the coarse scene is a short-focus scene, it corresponds to two image recognition functions: the reading recognition function and the article recognition function. A convolutional neural network (CNN) can then determine, based on what is held in the hand, whether this is an OCR scene or an object recognition scene (the implementation may follow the coarse classification process of step 202-1); if it is an OCR scene the reading recognition function is invoked, otherwise the article recognition function is invoked.
Specifically:
1) Recognize the first image through the scene fine classification model and determine whether the identification object is a book, a newspaper/periodical, or an article other than books and periodicals.
Newspapers/periodicals here include publications such as newspapers and magazines.
2) If the identification object is a book or a newspaper/periodical, determine that the image recognition function corresponding to the first image within its coarse scene is the reading recognition function, and call the reading recognition function to recognize the first image, i.e. perform OCR.
After OCR recognition, the recognition result can be output in a voice broadcast mode.
In practical application, the process of recognizing the first image by calling the reading recognition function can be optimized as follows:
After one round of OCR output is complete, images of the identification object continue to be acquired and analyzed to decide whether recognition needs to be run again. A blind reader may move the book up, down, left, right, forwards or backwards while reading; in that case no new recognition is needed, and OCR only needs to be run again when the user turns the page. This avoids repeatedly broadcasting the text from the beginning and clearly improves the user experience.
The concrete implementation is as follows:
While the reading recognition function is recognizing the first image, second images of the identification object are continuously acquired, and the content similarity between each second image and the first image is determined. When the content similarity falls below a first threshold, continuous acquisition of second images is stopped and the latest second image is taken as the new first image; the process then repeats: the reading recognition function is called to recognize the new first image while new second images of the identification object are continuously acquired, the content similarity between each new second image and the new first image is determined, and, whenever the similarity falls below the first threshold, OCR is performed again on the latest image. This continues until recognition of the identification object is complete.
For example, the frame used for the current OCR pass is saved first; subsequent frames are then collected continuously and compared against it for content similarity. If the similarity is lower than the set threshold, the user is assumed to have turned the page and OCR is run again; otherwise the user is assumed to be merely moving the current page and no new OCR is needed.
The process of determining the content similarity between the second image and the first image can be realized by means of feature point matching.
Specifically, second feature points in the second image are extracted, first feature points in the first image are extracted, and content similarity between the second image and the first image is determined according to the number of the second feature points matched with the first feature points.
For example, ORB/SIFT feature points are extracted from the first frame and from the current frame and matched against each other; the number of successfully matched points is counted, and the more points match, the higher the similarity. A code sketch of this matching step is given after step 3) below.
3) If the identification object is an article other than a book or newspaper/periodical, determine that the image recognition function corresponding to the first image within its coarse scene is the article recognition function, and call the article recognition function to recognize the first image.
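The feature-matching step mentioned under step 2) above can be sketched as follows with ORB features and a brute-force matcher from OpenCV; the match-count threshold standing in for the first threshold is an illustrative value.

```python
# Hedged sketch of the content-similarity check used for page-turn detection.
import cv2

orb = cv2.ORB_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def content_similarity(first_image, second_image):
    """Number of ORB keypoints matched between the two frames."""
    _, desc1 = orb.detectAndCompute(first_image, None)
    _, desc2 = orb.detectAndCompute(second_image, None)
    if desc1 is None or desc2 is None:
        return 0
    return len(matcher.match(desc1, desc2))

def page_turned(first_image, second_image, first_threshold=30):
    # few matched points -> low content similarity -> the user probably turned the page
    return content_similarity(first_image, second_image) < first_threshold
```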
At this point, accurate identification of the identification object is complete.
After recognition, the image recognition method provided by this embodiment further determines whether the image recognition function for the current scene has finished, that is, whether recognition is complete. If it is, the mobile computing processing unit readjusts the focal length of the image acquisition unit (e.g. the camera) back to the middle-focus position, i.e. sets the focal length of the image acquisition unit to the middle-focus focal length, and the next working cycle begins.
The method specifically comprises the following steps:
1) If the image acquisition unit is a camera with a controllable focal length, the mobile computing processing unit adjusts its focal length to the middle-focus position through the connection established with the image acquisition unit; the current focal length of the image acquisition unit is then the middle-focus focal length, and the readjustment is complete.
2) If the image acquisition unit consists of three or more fixed-focus cameras, including at least one long-focus camera, at least one middle-focus camera and at least one short-focus camera, the mobile computing processing unit selects, through the connection established with the image acquisition unit, the middle-focus camera as the default camera for the next capture; the readjustment to the middle-focus position is then complete.
In addition, the manner of determining whether recognition is complete includes, but is not limited to, continuously acquiring and processing images and judging, from the output of the image recognition function for the current scene, whether that function has finished.
For example, for the traffic light recognition function in a long-focus scene, if the fourth category (non-red/green/yellow light) has the highest probability, the function is considered finished.
Specifically, after the traffic light recognition function has been called and the identification object has been determined, from the first image, to be a red light, a green light, a yellow light or a non-red/green/yellow light: if the first image indicates a non-red/green/yellow light, recognition is determined to be complete and the image recognition method ends. If the first image indicates a red, green or yellow light, second images of the identification object are acquired continuously; each time a second image is acquired the traffic light recognition function is called to determine, from that second image, whether the identification object is a red light, a green light, a yellow light or a non-red/green/yellow light, and as soon as a second image indicates a non-red/green/yellow light, recognition is determined to be complete, continuous acquisition of second images stops, and the image recognition method ends.
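Reusing the detect_traffic_light helper sketched earlier (itself an assumption), the stopping loop for the traffic-light case could look like the following; next_frame and announce are placeholders for image acquisition and voice output.

```python
# Hedged sketch of the completion check for the traffic light recognition function.
def traffic_light_cycle(first_image, next_frame, announce):
    """next_frame() yields successive second images; announce() is voice feedback."""
    state = detect_traffic_light(first_image)
    while state is not None:          # a red / green / yellow light is still visible
        announce(state)
        state = detect_traffic_light(next_frame())
    # None means non-red/green/yellow: recognition is complete, end the method
```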
As another example, for the reading recognition function in a short-focus scene, completion may be judged from the number and confidence of the output OCR characters: if few characters are output or their confidence is low, the reading recognition function is considered finished.
Specifically, while second images of the identification object are being continuously acquired, each time a second image is acquired the reading recognition function is called to obtain the number of characters and their confidence in that second image. If the number of characters in the second image is below a second threshold and/or the confidence of the characters is below a third threshold, recognition is determined to be complete, continuous acquisition of second images stops, and the image recognition method ends.
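The stopping rule for the reading recognition function can be sketched as below; pytesseract is used here purely as an illustrative OCR engine (the embodiment does not name one), and both thresholds are placeholder values.

```python
# Hedged sketch of the "few characters or low confidence" completion check.
import pytesseract
from pytesseract import Output

def reading_finished(second_image, min_chars=10, min_confidence=40):
    data = pytesseract.image_to_data(second_image, output_type=Output.DICT)
    # keep only real word entries (pytesseract marks non-words with confidence -1)
    pairs = [(w, float(c)) for w, c in zip(data["text"], data["conf"])
             if w.strip() and float(c) >= 0]
    char_count = sum(len(w) for w, _ in pairs)
    mean_conf = sum(c for _, c in pairs) / len(pairs) if pairs else 0.0
    # too few characters and/or low confidence: the user is no longer reading
    return char_count < min_chars or mean_conf < min_confidence
```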
As another example, for the face recognition function in a middle-focus scene, human body/face detection may be performed to check whether the scene still contains a body or face of sufficient size; if not, the face recognition function is considered finished.
Specifically, after the image recognition function corresponding to the first image within its coarse scene has been called and the first image has been recognized, second images of the identification object are acquired continuously. Each time a second image is acquired, human body or face detection is performed on it; if no body and no face is detected, or the detected body is smaller than a fourth threshold, or the detected face is smaller than a fifth threshold, recognition is determined to be complete, continuous acquisition of second images stops, and the image recognition method ends.
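The corresponding check for the face recognition function is sketched below, with OpenCV's Haar cascade detector standing in for whatever body/face detector the system actually uses; the minimum face size plays the role of the fifth threshold and is an illustrative value.

```python
# Hedged sketch of the completion check for the face recognition function.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_recognition_finished(second_image, min_face_size=80):
    gray = cv2.cvtColor(second_image, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return True                       # no face left in the scene
    largest_side = max(max(w, h) for (_x, _y, w, h) in faces)
    return largest_side < min_face_size   # only small / distant faces remain
```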
At this point, the image recognition method provided by this embodiment is complete. Its basic idea is shown in Fig. 3. The image recognition functions needed by the blind are divided by working distance into long-focus (far), middle-focus (medium) and short-focus (near) usage scenes: for example, traffic light recognition requires long-focus capture, face recognition requires middle-focus capture, and OCR requires short-focus capture. When the image recognition system is working, the mobile computing processing unit keeps the image acquisition unit (e.g. a camera with a controllable focal length) at the middle-focus position and collects the scene image in which the identification object is located. Even though, at this focal length, images of short-focus or long-focus scenes are not sharp enough or the target region has insufficient resolution, the method can still coarsely classify the image and judge whether it belongs to a long-focus, short-focus or middle-focus scene. The identification focal length is then determined from the coarse scene category, the lens is adjusted to that focal length, an image of the identification object is captured again at the identification focal length, and the image recognition function for the corresponding scene is called; if the scene has several subdivided functions, it is first finely classified and then the specific image recognition function is called. Finally, the recognition result can be fed back to the blind user by voice broadcast.
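Putting the pieces together, one working cycle of Fig. 3 could be outlined as below. This reuses the hypothetical helpers from the earlier sketches (capture_scene_image, classify_scene, preprocess, Focus); run_recognition_function and announce are placeholders for the scene-specific recognition functions and the voice broadcast output, and the mapping from coarse scene index to focal length is an assumption.

```python
# Hedged end-to-end sketch of one working cycle of the image recognition system.
FOCUS_FOR_SCENE = {0: Focus.LONG, 1: Focus.MIDDLE, 2: Focus.SHORT}  # coarse scene index to focal length

def working_cycle(acquisition_unit, run_recognition_function, announce):
    # step 201: collect the scene image at the middle-focus focal length
    scene_image = capture_scene_image(acquisition_unit, Focus.MIDDLE)
    # step 202: coarse-classify the scene and map it to the identification focal length
    coarse_scene = classify_scene(preprocess(scene_image))
    identification_focus = FOCUS_FOR_SCENE[coarse_scene]
    # step 203: recapture at the identification focal length and call the matching function
    first_image = capture_scene_image(acquisition_unit, identification_focus)
    result = run_recognition_function(coarse_scene, first_image)  # traffic light / face / OCR ...
    announce(result)                                              # voice broadcast to the user
    # once recognition is complete, restore the middle-focus focal length for the next cycle
    capture_scene_image(acquisition_unit, Focus.MIDDLE)
```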
Beneficial effects:
According to the embodiments of the present application, the scene image in which the identification object is located is collected at the middle-focus focal length, the identification focal length is determined according to the scene image, and the identification object is identified at that focal length. The environment is therefore identified automatically, without any user intervention or input, and a suitable focal length is selected for that environment to achieve the best shooting effect, which improves identification accuracy and greatly improves the convenience of daily life for the blind.
In another aspect, embodiments of the present application also provide a computer program product for use in conjunction with an electronic device including a display, the computer program product including a computer-readable storage medium and a computer program mechanism embedded therein, the computer program mechanism including instructions for performing the steps of:
acquiring, at a middle-focus focal length, a scene image of the scene in which an identification object is located;
determining an identification focal length according to the scene image; and
identifying the identification object by using the identification focal length.
Optionally, determining the identification focal length according to the scene image includes:
roughly classifying the scene image through a scene rough classification model to determine the coarse scene to which the scene image belongs;
determining a focal length corresponding to the coarse scene as an identification focal length;
wherein, the coarse scene corresponds to one or more image recognition functions;
the coarse scene is a long-focus scene, a medium-focus scene or a short-focus scene;
the scene rough classification model is obtained by deep learning of a sample of a long-focus scene, a sample of a middle-focus scene and a sample of a short-focus scene.
Optionally, identifying the identification object by using the identification focal length includes:
acquiring a first image of an identification object by adopting an identification focal length;
if the coarse scene corresponds to an image recognition function, calling the image recognition function corresponding to the coarse scene to recognize the first image;
if the coarse scene corresponds to a plurality of image identification functions, performing fine classification on the first image through a scene fine classification model, and determining the image identification function corresponding to the first image in the coarse scene; and calling an image recognition function corresponding to the first image in the coarse scene to recognize the first image.
Optionally, the coarse scene is a long-focus scene, and the image identification function corresponding to the long-focus scene is a traffic light identification function;
the calling of the image recognition function corresponding to the coarse scene to recognize the first image includes:
calling a traffic light identification function, and determining that an identification object is a red light, a green light or a yellow light according to the first image; or,
and calling a traffic light identification function, and determining that the identification object is a red light, a green light, a yellow light or a non-red/green/yellow light according to the first image.
Optionally, after the traffic light identification function is invoked and the identification object is determined, according to the first image, to be a red light, a green light, a yellow light or a non-red/green/yellow light, the method further includes:
if the identification object is determined to be a non-red/green/yellow light according to the first image, determining that the identification is finished, and ending the image identification method;
and if the identification object is determined to be a red light, a green light or a yellow light according to the first image, continuously acquiring second images of the identification object; each time a second image is acquired, calling the traffic light identification function and determining, according to the second image, whether the identification object is a red light, a green light, a yellow light or a non-red/green/yellow light; and if the identification object is determined to be a non-red/green/yellow light according to the second image, determining that the identification is finished, stopping the continuous acquisition of second images, and ending the image identification method.
Optionally, the coarse scene is a short-focus scene, and the image recognition functions corresponding to the short-focus scene are a reading recognition function and an article recognition function;
the method for finely classifying the first image through the scene fine classification model and determining the image recognition function of the first image in the coarse scene comprises the following steps:
identifying the first image through the scene fine classification model, and determining whether the identification object is a book, a newspaper or an article other than books and newspapers;
if the recognition object is a book or if the recognition object is a newspaper, determining that the image recognition function corresponding to the first image in the coarse scene to which the first image belongs is a reading recognition function;
and if the identification object is an article which is not a book or a periodical, determining that the image identification function corresponding to the first image in the coarse scene to which the first image belongs is an article identification function.
Optionally, the image recognition function corresponding to the first image in the coarse scene is a reading recognition function;
calling an image recognition function corresponding to the first image in the coarse scene to recognize the first image, wherein the image recognition function comprises the following steps:
calling a reading identification function to identify a first image, and simultaneously, continuously acquiring a second image of an identification object;
determining the content similarity between each second image and the first image; and when the content similarity is lower than a first threshold, stopping the continuous acquisition of second images, taking the latest second image as a new first image, and re-executing the steps of calling the reading recognition function to recognize the new first image, continuously acquiring new second images of the recognition object, determining the content similarity between each new second image and the new first image, and the subsequent steps.
Optionally, determining the content similarity between the second image and the first image comprises:
extracting second characteristic points in the second image and extracting first characteristic points in the first image;
and determining the content similarity between the second image and the first image according to the number of the second feature points matched with the first feature points.
Optionally, after continuously acquiring the second image of the identified object, the method further includes:
when a second image is acquired, calling a reading recognition function to recognize the number and the confidence degree of characters in the second image;
and if the number of the characters in the second image is smaller than a second threshold value and/or the confidence coefficient of the characters in the second image is smaller than a third threshold value, determining that the recognition is finished, stopping continuous acquisition of the second image, and ending the image recognition method.
Optionally, the image recognition function of the first image in the coarse scene is a face recognition function in a middle-focus scene;
after the image recognition function corresponding to the first image in the coarse scene to which the first image belongs is called to recognize the first image, the method further includes:
continuously acquiring second images of the recognition object;
detecting a human body or a human face in a second image every time when the second image is acquired;
and if the detection result is that no human body exists and no human face exists, or the detection result is that the size of the human body is smaller than a fourth threshold value, or the detection result is that the size of the human face is smaller than a fifth threshold value, determining that the identification is finished, stopping continuous acquisition of the second image, and finishing the image identification method.
Beneficial effects:
According to the embodiments of the present application, the scene image in which the identification object is located is collected at the middle-focus focal length, the identification focal length is determined according to the scene image, and the identification object is identified at that focal length. The environment is therefore identified automatically, without any user intervention or input, and a suitable focal length is selected for that environment to achieve the best shooting effect, which improves identification accuracy and greatly improves the convenience of daily life for the blind.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.

Claims (18)

1. An image recognition method, characterized in that the method comprises:
acquiring, at a middle-focus focal length, a scene image of the scene in which an identification object is located;
determining an identification focal length according to the scene image; and
identifying the identification object by using the identification focal length.
2. The method of claim 1, wherein the determining an identification focal length according to the scene image comprises:
roughly classifying the scene image through a scene rough classification model to determine the coarse scene to which the scene image belongs;
determining a focal length corresponding to the coarse scene as an identification focal length;
wherein, the coarse scene corresponds to one or more image recognition functions;
the coarse scene is a long-focus scene, a medium-focus scene or a short-focus scene;
the scene rough classification model is obtained by deep learning of a sample of a long-focus scene, a sample of a middle-focus scene and a sample of a short-focus scene.
3. The method of claim 2, wherein the identifying the identification object using the identification focal length comprises:
acquiring a first image of the identification object using the identification focal length;
if the coarse scene corresponds to an image recognition function, calling the image recognition function corresponding to the coarse scene to recognize the first image;
if the coarse scene corresponds to a plurality of image identification functions, performing fine classification on the first image through a scene fine classification model, and determining the image identification function corresponding to the first image in the coarse scene; and calling an image identification function corresponding to the first image in the coarse scene to identify the first image.
4. The method according to claim 3, wherein the coarse scene is a tele scene, and the image recognition function corresponding to the tele scene is a traffic light recognition function;
the calling an image identification function corresponding to the coarse scene to identify the first image comprises the following steps:
calling a traffic light identification function, and determining that the identification object is a red light, a green light or a yellow light according to the first image; or,
and calling a traffic light identification function, and determining that the identification object is a red light, a green light, a yellow light or a non-red/green/yellow light according to the first image.
5. The method of claim 4, wherein after invoking the traffic light identification function and determining that the identification object is a red light, a green light, a yellow light, or a non-red/green/yellow light based on the first image, the method further comprises:
if the identification object is determined to be a non-red/green/yellow light according to the first image, determining that the identification is finished, and ending the image identification method;
and if the identification object is determined to be a red light, a green light or a yellow light according to the first image, continuously acquiring second images of the identification object; each time a second image is acquired, calling the traffic light identification function and determining, according to the second image, whether the identification object is a red light, a green light, a yellow light or a non-red/green/yellow light; and if the identification object is determined to be a non-red/green/yellow light according to the second image, determining that the identification is finished, stopping the continuous acquisition of second images, and ending the image identification method.
6. The method according to claim 3, wherein the coarse scene is a short-focus scene, and the image recognition functions corresponding to the short-focus scene are a reading recognition function and an article recognition function;
the fine classification of the first image through the scene fine classification model and the determination of the image identification function corresponding to the first image in the coarse scene comprise:
identifying the first image through the scene fine classification model, and determining whether the identification object is a book, a newspaper or an article other than books and newspapers;
if the recognition object is a book or if the recognition object is a newspaper, determining that the image recognition function corresponding to the first image in the rough scene is a reading recognition function;
and if the identification object is an article other than a book or a periodical, determining that the image identification function corresponding to the first image in the coarse scene to which the first image belongs is an article identification function.
7. The method according to claim 6, wherein the corresponding image recognition function of the first image in the coarse scene is a reading recognition function;
the calling of the image identification function corresponding to the first image in the coarse scene to identify the first image comprises:
calling a reading identification function to identify the first image, and simultaneously, continuously acquiring a second image of the identified object;
determining the content similarity between each second image and the first image; and when the content similarity is lower than a first threshold, stopping the continuous acquisition of second images, taking the latest second image as a new first image, and re-executing the steps of calling the reading identification function to identify the new first image, continuously acquiring new second images of the identified object, determining the content similarity between each new second image and the new first image, and the subsequent steps.
8. The method of claim 7, wherein determining the content similarity between the second image and the first image comprises:
extracting second feature points in the second image and extracting first feature points in the first image;
and determining the content similarity between the second image and the first image according to the number of the second feature points matched with the first feature points.
9. The method of claim 7 or 8, wherein after said continuously acquiring second images of said identified object, further comprising:
when a second image is acquired, calling a reading recognition function to recognize the number and the confidence degree of characters in the second image;
and if the number of the characters in the second image is smaller than a second threshold value and/or the confidence coefficient of the characters in the second image is smaller than a third threshold value, determining that the recognition is finished, stopping continuous acquisition of the second image, and ending the image recognition method.
10. The method according to claim 3, wherein the image recognition function corresponding to the first image in the coarse scene is a face recognition function in a mid-focus scene;
after the image identification function corresponding to the first image in the coarse scene to which the first image belongs is called to identify the first image, the method further comprises:
continuously acquiring second images of the identified object;
detecting a human body or a human face in a second image every time when the second image is acquired;
and if the detection result is that no human body exists and no human face exists, or the detection result is that the size of the human body is smaller than a fourth threshold value, or the detection result is that the size of the human face is smaller than a fifth threshold value, determining that the identification is finished, stopping continuous acquisition of the second image, and finishing the image identification method.
11. An electronic device, characterized in that the electronic device comprises:
a memory and one or more processors, the memory being connected to the processors through a communication bus; the processors are configured to execute instructions stored in the memory; the memory stores instructions for carrying out the steps of the method according to any one of claims 1 to 10.
12. The electronic device of claim 11, wherein the electronic device is a smartphone.
13. A computer program product for use in conjunction with an electronic device including a display, the computer program product comprising a computer readable storage medium and a computer program mechanism embedded therein, the computer program mechanism comprising instructions for carrying out each step of the method according to any one of claims 1 to 10.
14. An image recognition system, characterized in that the image recognition system comprises: an image acquisition unit and a mobile computing processing unit;
the image acquisition unit is a camera with a controllable focal length, wherein the controllable focal length range comprises a long focal length, a middle focal length and a short focal length; or,
the image acquisition unit comprises three or more fixed-focus cameras, including at least one long-focus camera, at least one mid-focus camera and at least one short-focus camera;
the mobile computing processing unit is the electronic device of claim 11 or 12;
the mobile computing processing unit is connected with the image acquisition unit through a Universal Serial Bus (USB) or a wireless communication mode.
15. The system according to claim 14, wherein the wireless communication mode is a bluetooth mode;
the image acquisition unit is located on a pair of wearable glasses.
16. The system of claim 14 or 15,
wherein the mobile computing processing unit is configured to control the focal length of the image acquisition unit to be the middle focal length, and to acquire, through the image acquisition unit at the middle focal length, a scene image of the scene in which the identified object is located.
17. The system of claim 16,
the mobile computing processing unit is further configured to control the focal length of the image acquisition unit to be the identification focal length, and to acquire the first image of the identified object through the image acquisition unit at the identification focal length.
18. The system of claim 17,
the mobile computing processing unit is further configured to control the focal length of the image acquisition unit to be the middle focal length after it is determined that the recognition is completed.
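Claims 16 to 18 describe a focal-length handshake between the mobile computing processing unit and the image acquisition unit; a minimal sketch of that control flow is given below, assuming a hypothetical Camera interface (set_focal_length/capture) and hypothetical classify_scene, pick_focal_length and recognise callables.

class Camera:
    """Hypothetical interface of the controllable-focal-length acquisition unit."""
    def set_focal_length(self, mode: str) -> None: ...   # "short", "middle" or "long"
    def capture(self): ...                                # returns one image frame

def recognition_session(camera, classify_scene, pick_focal_length, recognise):
    camera.set_focal_length("middle")          # claim 16: scene image at the middle focal length
    scene_image = camera.capture()
    coarse_scene = classify_scene(scene_image)
    camera.set_focal_length(pick_focal_length(coarse_scene))   # claim 17: identification focal length
    first_image = camera.capture()
    recognise(coarse_scene, first_image)
    camera.set_focal_length("middle")          # claim 18: back to the middle focal length when done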
CN201880000060.XA 2018-01-10 2018-01-10 Image recognition method, system, electronic device and computer program product Active CN108235816B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/072111 WO2019136636A1 (en) 2018-01-10 2018-01-10 Image recognition method and system, electronic device, and computer program product

Publications (2)

Publication Number Publication Date
CN108235816A true CN108235816A (en) 2018-06-29
CN108235816B CN108235816B (en) 2020-10-16

Family

ID=62657703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880000060.XA Active CN108235816B (en) 2018-01-10 2018-01-10 Image recognition method, system, electronic device and computer program product

Country Status (2)

Country Link
CN (1) CN108235816B (en)
WO (1) WO2019136636A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111061898A (en) * 2019-12-13 2020-04-24 Oppo(重庆)智能科技有限公司 Image processing method, image processing device, computer equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
NO327899B1 (en) * 2007-07-13 2009-10-19 Tandberg Telecom As Procedure and system for automatic camera control
JP4894712B2 (en) * 2007-10-17 2012-03-14 ソニー株式会社 Composition determination apparatus, composition determination method, and program
JP5788197B2 (en) * 2011-03-22 2015-09-30 オリンパス株式会社 Image processing apparatus, image processing method, image processing program, and imaging apparatus
US9602728B2 (en) * 2014-06-09 2017-03-21 Qualcomm Incorporated Image capturing parameter adjustment in preview mode

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101783882A (en) * 2009-01-15 2010-07-21 华晶科技股份有限公司 Method and image capturing device for automatically determining scenario mode
US20110063494A1 (en) * 2009-09-16 2011-03-17 Altek Corporation Continuous focusing method for digital camera
CN101714262A (en) * 2009-12-10 2010-05-26 北京大学 Method for reconstructing three-dimensional scene of single image
CN105122303A (en) * 2011-12-28 2015-12-02 派尔高公司 Camera calibration using feature identification
CN103197491A (en) * 2013-03-28 2013-07-10 华为技术有限公司 Method capable of achieving rapid automatic focusing and image acquisition device
CN107533375A (en) * 2015-06-29 2018-01-02 埃西勒国际通用光学公司 scene image analysis module
CN105007431A (en) * 2015-07-03 2015-10-28 广东欧珀移动通信有限公司 Picture shooting method based on various shooting scenes and terminal
CN105357526A (en) * 2015-11-13 2016-02-24 西安交通大学 Compressed domain based mobile phone football video quality evaluation device and method considering scene classification
CN106375448A (en) * 2016-09-05 2017-02-01 腾讯科技(深圳)有限公司 Image processing method, device and system

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110798608A (en) * 2018-08-02 2020-02-14 北京京东尚科信息技术有限公司 Method and device for identifying image
CN112560840A (en) * 2018-09-20 2021-03-26 西安艾润物联网技术服务有限责任公司 Method for identifying multiple identification areas, identification terminal and readable storage medium
CN112560840B (en) * 2018-09-20 2023-05-12 西安艾润物联网技术服务有限责任公司 Method for identifying multiple identification areas, identification terminal, and readable storage medium
CN109949359A (en) * 2019-02-14 2019-06-28 深兰科技(上海)有限公司 A kind of method and apparatus carrying out target detection based on SSD model
CN110059678A (en) * 2019-04-17 2019-07-26 上海肇观电子科技有限公司 A kind of detection method, device and computer readable storage medium
CN110232313A (en) * 2019-04-28 2019-09-13 南京览视医疗科技有限公司 A kind of eye recommended method, system, electronic equipment and storage medium
CN110213491A (en) * 2019-06-26 2019-09-06 Oppo广东移动通信有限公司 A kind of focalization method, device and storage medium
CN110782692A (en) * 2019-10-31 2020-02-11 青岛海信网络科技股份有限公司 Signal lamp fault detection method and system
CN112601025A (en) * 2020-12-24 2021-04-02 深圳集智数字科技有限公司 Image acquisition method and device, and computer readable storage medium of equipment
CN112601025B (en) * 2020-12-24 2022-07-05 深圳集智数字科技有限公司 Image acquisition method and device, and computer readable storage medium of equipment
CN114666501A (en) * 2022-03-17 2022-06-24 深圳市百泰实业股份有限公司 Intelligent control method for camera of wearable device

Also Published As

Publication number Publication date
WO2019136636A1 (en) 2019-07-18
CN108235816B (en) 2020-10-16

Similar Documents

Publication Publication Date Title
CN108235816B (en) Image recognition method, system, electronic device and computer program product
CN106462766B (en) Image capture parameters adjustment is carried out in preview mode
CN108288027B (en) Image quality detection method, device and equipment
WO2020164282A1 (en) Yolo-based image target recognition method and apparatus, electronic device, and storage medium
US8830357B2 (en) Image processing device and image processing method including a blurring process
CN105426828B (en) Method for detecting human face, apparatus and system
EP2775349B1 (en) A method for determining an in-focus position and a vision inspection system
CN111626371A (en) Image classification method, device and equipment and readable storage medium
CN105224947B (en) classifier training method and system
JP7142420B2 (en) Image processing device, learning method, trained model, image processing method
CN108154102A (en) A kind of traffic sign recognition method
CN104717413A (en) Shooting assistance method and equipment
CN111967319B (en) Living body detection method, device, equipment and storage medium based on infrared and visible light
CN104778474A (en) Classifier construction method for target detection and target detection method
CN104281839A (en) Body posture identification method and device
CN111598065B (en) Depth image acquisition method, living body identification method, apparatus, circuit, and medium
CN105678245A (en) Target position identification method based on Haar features
CN105678242A (en) Focusing method and apparatus in the mode of holding certificate in hands
WO2016028532A1 (en) Apparatus and method for capturing a scene image with text using flash illumination
CN106874825A (en) The training method of Face datection, detection method and device
CN110443181A (en) Face identification method and device
CN108734667B (en) Image processing method and system
Yanagisawa et al. Face detection for comic images with deformable part model
CN108764230A (en) A kind of bank's card number automatic identifying method based on convolutional neural networks
CN110121723B (en) Artificial neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210209

Address after: 201111 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Patentee after: Dalu Robot Co.,Ltd.

Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Patentee before: Shenzhen Qianhaida Yunyun Intelligent Technology Co.,Ltd.

CP03 Change of name, title or address

Address after: 201111 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai

Patentee after: Dayu robot Co.,Ltd.

Address before: 201111 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Patentee before: Dalu Robot Co.,Ltd.