{"id":14930,"date":"2024-08-05T23:33:38","date_gmt":"2024-08-05T15:33:38","guid":{"rendered":"https:\/\/fgchen.com\/wpedu\/?p=14930"},"modified":"2026-03-30T14:29:55","modified_gmt":"2026-03-30T06:29:55","slug":"%e9%9b%bb%e8%a1%a8%e5%ba%a6%e6%95%b8%e7%9a%84%e8%ad%98%e5%88%a5%e5%99%a8","status":"publish","type":"post","link":"https:\/\/fgchen.com\/wpedu\/2024\/08\/%e9%9b%bb%e8%a1%a8%e5%ba%a6%e6%95%b8%e7%9a%84%e8%ad%98%e5%88%a5%e5%99%a8\/","title":{"rendered":"Electric Meter Reading Recognizer"},"content":{"rendered":"<p>An overview of the basic steps for writing a recognizer that reads electric meter digits:<\/p>\n<h3>Step 1: Data Collection<\/h3>\n<p>Collect image data containing electric meter readings. The images should cover a variety of meter types and different lighting conditions.<\/p>\n<h3>Step 2: Data Annotation<\/h3>\n<p>Annotate the collected images, labeling the digits shown on the meter in each image. You can use a tool such as LabelImg for image annotation.<\/p>\n<h3>Step 
3: Image Preprocessing<\/h3>\n<p>Use an image processing library such as OpenCV to preprocess the images, including:<\/p>\n<ol>\n<li>Grayscale conversion: convert color images to grayscale.<\/li>\n<li>Noise removal: apply techniques such as Gaussian blur to remove noise from the images.<\/li>\n<li>Edge detection: use Canny edge detection to find the boundaries of the digits on the meter.<\/li>\n<li>Image cropping: crop out the region that contains the digits.<\/li>\n<\/ol>\n<h3>Step 4: Optical Character Recognition (OCR)<\/h3>\n<p>Use OCR to recognize the digits in the image. Tesseract is a popular open-source OCR library that can be used to recognize digits.<\/p>\n<pre><code class=\"language-python\">import cv2\nimport pytesseract\n\n# Read the image\nimage = cv2.imread(&#039;electric_meter.jpg&#039;)\ngray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)\n\n# Preprocess\ngray = cv2.GaussianBlur(gray, (5, 5), 0)\nedges = cv2.Canny(gray, 50, 150)\n\n# Run OCR with Tesseract\ncustom_config = r&#039;--oem 3 --psm 6 outputbase digits&#039;\ndigits = pytesseract.image_to_string(edges, config=custom_config)\n\nprint(&quot;Recognized digits:&quot;, digits)<\/code><\/pre>\n<h3>Step 
5: Machine Learning Model<\/h3>\n<p>If the OCR results are not accurate enough, consider training a convolutional neural network (CNN) for digit recognition. This requires a labeled dataset for training the model.<\/p>\n<ol>\n<li>Build and train a digit recognition model:\n<pre><code class=\"language-python\">import tensorflow as tf\nfrom tensorflow.keras import layers, models\n\n# Build the model\nmodel = models.Sequential([\n    layers.Conv2D(32, (3, 3), activation=&#039;relu&#039;, input_shape=(28, 28, 1)),\n    layers.MaxPooling2D((2, 2)),\n    layers.Conv2D(64, (3, 3), activation=&#039;relu&#039;),\n    layers.MaxPooling2D((2, 2)),\n    layers.Conv2D(64, (3, 3), activation=&#039;relu&#039;),\n    layers.Flatten(),\n    layers.Dense(64, activation=&#039;relu&#039;),\n    layers.Dense(10, activation=&#039;softmax&#039;)\n])\n\n# Compile the model\nmodel.compile(optimizer=&#039;adam&#039;,\n              loss=&#039;sparse_categorical_crossentropy&#039;,\n              metrics=[&#039;accuracy&#039;])\n\n# Train the model\nmodel.fit(train_images, train_labels, epochs=5, validation_data=(test_images, test_labels))<\/code><\/pre>\n<\/li>\n<li>Use the model for prediction:\n<pre><code class=\"language-python\"># Predict the digit (predict_classes was removed in TensorFlow 2.6; take the argmax of predict instead)\nprediction = model.predict(preprocessed_image)\npredicted_digit = prediction.argmax()\nprint(&quot;Predicted digit:&quot;, predicted_digit)<\/code><\/pre>\n<\/li>\n<\/ol>\n<h3>Step 6: System Integration<\/h3>\n<p>Integrate the image preprocessing and digit recognition model into a complete system. You can use Python Flask or Django to build a web application that lets users upload meter images and returns the recognized digits.<\/p>\n<p>Such a recognizer can be used to read electric meter values automatically, improving efficiency and reducing human error.<\/p>\n<p>Here is a complete step-by-step tutorial for creating an electric meter digit recognition system:<\/p>\n<h3>Step 1: Data Collection<\/h3>\n<ol>\n<li><strong>Capture Images<\/strong>:\n<ul>\n<li>Collect images of electric meters. Ensure you have a variety of images with different lighting conditions and angles.<\/li>\n<li>Save these images in a folder, e.g., <code>electric_meter_images<\/code>.<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<h3>Step 2: Data Annotation<\/h3>\n<ol>\n<li><strong>Annotate Images<\/strong>:\n<ul>\n<li>Use a tool like LabelImg to annotate the images. 
Draw bounding boxes around the digits and label them accordingly.<\/li>\n<li>Save the annotations in a format like Pascal VOC XML or YOLO.<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<h3>Step 3: Image Preprocessing<\/h3>\n<ol>\n<li>\n<p><strong>Install Required Libraries<\/strong>:<\/p>\n<pre><code class=\"language-bash\">pip install opencv-python pytesseract tensorflow<\/code><\/pre>\n<\/li>\n<li>\n<p><strong>Preprocess Images with OpenCV<\/strong>:<\/p>\n<pre><code class=\"language-python\">import cv2\nimport numpy as np\n\ndef preprocess_image(image_path):\n    # Read the image\n    image = cv2.imread(image_path)\n    # Convert to grayscale\n    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)\n    # Apply Gaussian Blur\n    gray = cv2.GaussianBlur(gray, (5, 5), 0)\n    # Apply Canny Edge Detection\n    edges = cv2.Canny(gray, 50, 150)\n    return edges\n\n# Example usage\npreprocessed_image = preprocess_image('electric_meter_images\/example.jpg')\ncv2.imshow('Preprocessed Image', preprocessed_image)\ncv2.waitKey(0)\ncv2.destroyAllWindows()<\/code><\/pre>\n<\/li>\n<\/ol>\n<h3>Step 4: Optical Character Recognition (OCR)<\/h3>\n<ol>\n<li>\n<p><strong>Use Tesseract for OCR<\/strong>:<\/p>\n<pre><code class=\"language-python\">import pytesseract\n\ndef ocr_image(image):\n    # Custom configuration for Tesseract\n    custom_config = r'--oem 3 --psm 6 outputbase digits'\n    digits = pytesseract.image_to_string(image, config=custom_config)\n    return digits\n\n# Example usage\ndigits = ocr_image(preprocessed_image)\nprint(\"Recognized digits:\", digits)<\/code><\/pre>\n<\/li>\n<\/ol>\n<h3>Step 5: Machine Learning Model (Optional for Improved Accuracy)<\/h3>\n<ol>\n<li>\n<p><strong>Prepare Dataset<\/strong>:<\/p>\n<ul>\n<li>Split your annotated images into training and testing datasets.<\/li>\n<li>Ensure your images are resized to a consistent shape, e.g., 28&#215;28 pixels.<\/li>\n<\/ul>\n<\/li>\n<li>\n<p><strong>Build and Train a Convolutional Neural Network (CNN) with 
TensorFlow<\/strong>:<\/p>\n<pre><code class=\"language-python\">import tensorflow as tf\nfrom tensorflow.keras import layers, models\n\n# Load and preprocess your dataset\n# Assuming train_images, train_labels, test_images, test_labels are available\n\n# Normalize the pixel values\ntrain_images = train_images \/ 255.0\ntest_images = test_images \/ 255.0\n\n# Build the model\nmodel = models.Sequential([\n    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),\n    layers.MaxPooling2D((2, 2)),\n    layers.Conv2D(64, (3, 3), activation='relu'),\n    layers.MaxPooling2D((2, 2)),\n    layers.Conv2D(64, (3, 3), activation='relu'),\n    layers.Flatten(),\n    layers.Dense(64, activation='relu'),\n    layers.Dense(10, activation='softmax')\n])\n\n# Compile the model\nmodel.compile(optimizer='adam',\n              loss='sparse_categorical_crossentropy',\n              metrics=['accuracy'])\n\n# Train the model\nmodel.fit(train_images, train_labels, epochs=5, validation_data=(test_images, test_labels))<\/code><\/pre>\n<\/li>\n<li>\n<p><strong>Predict Using the Model<\/strong>:<\/p>\n<pre><code class=\"language-python\">def predict_digit(image):\n    # Preprocess the image for the model\n    image = cv2.resize(image, (28, 28))\n    image = np.expand_dims(image, axis=0)\n    image = np.expand_dims(image, axis=-1)\n    image = image \/ 255.0\n\n    # Predict the digit\n    prediction = model.predict(image)\n    digit = np.argmax(prediction)\n    return digit\n\n# Example usage\ndigit = predict_digit(preprocessed_image)\nprint(&quot;Predicted digit:&quot;, digit)<\/code><\/pre>\n<\/li>\n<\/ol>\n<h3>Step 6: System Integration<\/h3>\n<ol>\n<li>\n<p><strong>Create a Web Application with Flask<\/strong>:<\/p>\n<pre><code class=\"language-bash\">pip install flask<\/code><\/pre>\n<\/li>\n<li>\n<p><strong>Create the Flask App<\/strong>:<\/p>\n<pre><code class=\"language-python\">from flask import Flask, request, jsonify\nimport 
cv2\nimport numpy as np\nimport pytesseract\nimport os\n\napp = Flask(__name__)\n\n# Make sure the upload folder exists\nos.makedirs('uploads', exist_ok=True)\n\ndef preprocess_image(image_path):\n    image = cv2.imread(image_path)\n    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)\n    gray = cv2.GaussianBlur(gray, (5, 5), 0)\n    edges = cv2.Canny(gray, 50, 150)\n    return edges\n\n@app.route('\/upload', methods=['POST'])\ndef upload_image():\n    if 'file' not in request.files:\n        return jsonify({'error': 'No file uploaded'}), 400\n    file = request.files['file']\n    file_path = f'uploads\/{file.filename}'\n    file.save(file_path)\n\n    preprocessed_image = preprocess_image(file_path)\n    digits = pytesseract.image_to_string(preprocessed_image, config=r'--oem 3 --psm 6 outputbase digits')\n    return jsonify({'digits': digits})\n\nif __name__ == '__main__':\n    app.run(debug=True)<\/code><\/pre>\n<\/li>\n<li>\n<p><strong>Run the Flask App<\/strong>:<\/p>\n<pre><code class=\"language-bash\">python app.py<\/code><\/pre>\n<\/li>\n<li>\n<p><strong>Test the Web App<\/strong>:<\/p>\n<ul>\n<li>Use a tool like Postman to upload images and check the recognized digits.<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<p>By following these steps, you will have a complete system that can recognize the digits on electric meters using a combination of image processing, OCR, and potentially a machine learning model for enhanced accuracy.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>An overview of the basic steps for writing a recognizer that reads electric meter digits: Step 1: Data Collection Collect image data containing electric meter &hellip; 
<\/p>\n","protected":false},"author":1,"featured_media":14759,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_import_markdown_pro_load_document_selector":0,"_import_markdown_pro_submit_text_textarea":"","fifu_image_url":"","fifu_image_alt":"","footnotes":""},"categories":[254],"tags":[],"class_list":["post-14930","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-254"],"_links":{"self":[{"href":"https:\/\/fgchen.com\/wpedu\/wp-json\/wp\/v2\/posts\/14930","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/fgchen.com\/wpedu\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/fgchen.com\/wpedu\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/fgchen.com\/wpedu\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/fgchen.com\/wpedu\/wp-json\/wp\/v2\/comments?post=14930"}],"version-history":[{"count":1,"href":"https:\/\/fgchen.com\/wpedu\/wp-json\/wp\/v2\/posts\/14930\/revisions"}],"predecessor-version":[{"id":14931,"href":"https:\/\/fgchen.com\/wpedu\/wp-json\/wp\/v2\/posts\/14930\/revisions\/14931"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/fgchen.com\/wpedu\/wp-json\/wp\/v2\/media\/14759"}],"wp:attachment":[{"href":"https:\/\/fgchen.com\/wpedu\/wp-json\/wp\/v2\/media?parent=14930"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/fgchen.com\/wpedu\/wp-json\/wp\/v2\/categories?post=14930"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/fgchen.com\/wpedu\/wp-json\/wp\/v2\/tags?post=14930"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}