id: string (12-21 characters)
username: string (6 classes)
license: string (6 classes)
title: string (34-98 characters)
publication_description: string (4.41k-109k characters)
0CBAR8U8FakE
3rdson
none
How to Add Memory to RAG Applications and AI Agents
![1705674621330.png](1705674621330.png) A few months ago, I built a RAG application, and after building it, I realized I needed to add memory to it before moving it to production. I went on YouTube and searched for videos, but I couldn’t find anything meaningful. The videos I did find weren’t about adding persistent memory to a production-ready RAG application; they only talked about adding in-memory storage, which is unsuitable for a full-scale application. It was then that I realized I needed to figure things out myself and write an article that would guide readers through the thought process and steps needed to add memory to a RAG application or AI agent. Quick Note: If you are building with Streamlit, you can follow this tutorial to find an easy way to add memory to your Streamlit app. --- Prerequisites: 1. Before jumping into the discussion, I assume you already know what RAG is and why it is needed. If you’re unfamiliar with this concept, you can read more about it [here](https://www.datacamp.com/blog/what-is-retrieval-augmented-generation-rag). 2. I also assume you already know how to build RAG applications. If you want to learn how, you can follow my [previous article](https://app.readytensor.ai/publications/how_to_build_rag_apps_with_pinecone_openai_langchain_and_python_sBFzhbX4GpeQ). 3. For this tutorial, I used MongoDB as my traditional database, LangChain as my LLM framework and OpenAI GPT-3.5 Turbo as my LLM. But you can use any technologies of your choice once you have understood the workflow. 4. To follow along, `pip install` the libraries below. ``` openai python-dotenv langchain-openai pymongo ``` --- ## Now you are good to go ![1-5e5944a1.png](1-5e5944a1.png) --- # What Is Memory and Why Do RAG Applications and AI Agents Need It? Let’s use ChatGPT as an example. When you ask ChatGPT a question like `“Who is the current president of America?”`, it will tell you `“Joe Biden”`, and if you then ask `“How old is he?”`, ChatGPT will tell you `“81”`. Now, here is the question: how was ChatGPT able to relate the second question to the first and give you the answer you needed without you being so specific in your question? The simple answer is the presence of memory. Just as human beings can easily relate to past experiences or questions, ChatGPT has been built with memory, which helps it recognize when you are asking a question related to a previous one. In my simplest definition, and with regard to RAG and AI agents, adding memory to a RAG application means enabling the AI agent to make inferences from previous questions and give you new answers based on the new question, previous questions and previous answers. Now that you know what memory is, the question becomes: how can I add memory to my RAG application or AI agent? Here is the concept I came up with. Human beings have memory because they have a brain that stores information, and they can answer and make decisions based on the information (data) stored in their brains. So to achieve this when building an AI agent or a RAG application, you need to give the application a brain of its own by including the following: 1. A database (for storing users’ questions, the AI’s answers, chat IDs, the user’s email, etc.) 2. A function that retrieves users’ previous questions whenever a new question is asked 3. 
A function that uses LLM to check if the current question is related to the previous one. If it is related, it will create a new stand-alone question using the present question and previous questions. This question will now be embedded and sent to the vector database or AI agent, depending on what you are building. But if the present question is not related to the past questions, it will send the question as it is. ## Creating a Database for Storing the User’s Questions and AI’s Answers Below, I used pymongo to create a Mongo database so you can have an understanding of the kind of fields you will need. ```python from pymongo import MongoClient from datetime import datetime from bson.objectid import ObjectId # Connect to MongoDB (modify the URI to match your setup) client = MongoClient("mongodb://localhost:27017/") db = client["your_database_name"] # The name of your database collection = db["my_ai_application"] # The name of the collection # Sample document to be inserted document = { "_id": ObjectId("66c990f566416e871fdd0b43"), # you can omit this to auto-generate "question": "Who is the President of America?", "email": "[email protected]", "response": "The current president of the United States is Joe Biden.", "chatId": "52ded9ebd9ac912c8433b699455eb655", "userId": "6682632b88c6b314ce887716", "isActive": True, "isDeleted": False, "createdAt": datetime(2024, 8, 24, 7, 51, 17, 503000), "updatedAt": datetime(2024, 8, 24, 7, 51, 17, 503000) } # Insert the document into the collection result = collection.insert_one(document) print(f"Inserted document with _id: {result.inserted_id}") ``` In the code above, I created a MongoDB connection using MongoClient and connected to a specified database and collection in MongoDB. I then defined a sample document with fields like `question`, `email`, `response`, `chatId`, and `userId`, along with metadata fields such as` isActive`, `isDeleted`, `createdAt`, and `updatedAt` to track each entry's status and timestamps. The _id field is assigned using ObjectId, which you can omit to let MongoDB auto-generate it. When insert_one(document) is called, the document is inserted into the `my_ai_application` collection, and MongoDB returns a unique _id for the document, which is printed to confirm the insertion. Make sure you change your connection credentials and other specific information. Now that you have created the database and have understood the kind of fields you need in the database, let’s now see how to use the database to create a memory. ## Creating a Function That Retrieves Users’ Previous Questions Whenever a New Question Is Asked Below, we are going to define a function that retrieves the user’s last 3 questions from the database using the user’s email and the chat_id. ```python from typing import List client = MongoClient("mongodb://localhost:27017/") db = client.your_database_name collection = db.my_ai_application # no need to initialize this connection if you had already done it def get_last_three_questions(email: str, chat_id: str) -> List[str]: """ Retrieves the last three questions asked by a user in a specific chat session. Args: email (str): The user's email address used to filter results. chat_id (str): The unique identifier for the chat session. Returns: List[str]: A list containing the last three questions asked by the user, ordered from most recent to oldest. 
""" query = {"email": email, "chatId": chat_id} results = collection.find(query).sort("createdAt", -1).limit(3) questions = [result["question"] for result in results] return questions # Call the function past_questions = get_last_three_questions("[email protected]", "52ded9ebd9ac912c8433b699455eb655") ``` You can change this to retrieve the last five or even ten questions from the user’s database by setting `.limit(5)` or `.limit(10).` But note: These questions, together with the new question will still be passed into a system prompt later. So, you need to make sure you aren’t exceeding the input token size of your LLM. Now that you have defined a function that retrieves the past questions from the database, you need to create a new function that compares the current question with the previous questions and creates a stand-alone question if needed. But if the new question has nothing to do with the previous questions, it will push the user’s question just as it is. Creating a function that creates a standalone question by comparing the new question with the previous questions Below we are going to create a system prompt called new_question_modifier and now use this system prompt within the function we will define. It is this system prompt that does the comparing for us. Check the code below to understand how it works. ```python from langchain_openai import OpenAI from dotenv import load_dotenv # Load your OpenAI API key from .env file load_dotenv() CHAT_LLM = OpenAI() new_question_modifier = """ Your primary task is to determine if the latest question requires context from the chat history to be understood. IMPORTANT: If the latest question is standalone and can be fully understood without any context from the chat history or is not related to the chat history, you MUST return it completely unchanged. Do not modify standalone questions in any way. Only if the latest question clearly references or depends on the chat history should you reformulate it as a complete, standalone legal question. When reformulating: """ def modify_question_with_memory(new_question: str, past_questions: List[str]) -> str: """ Modifies a new question by incorporating past questions as context. This function takes a new question and a list of past questions, combining them into a single prompt for the language model (LLM) to generate a standalone question with sufficient context. If there are no past questions, the new question is returned as-is. Args: new_question (str): The latest question asked. past_questions (List[str]): A list of past questions for context. Returns: str: A standalone question that includes necessary context from past questions. """ if past_questions: past_questions_text = " ".join(past_questions) # Combine the system prompt with the past questions and the new question system_prompt = f"{new_question_modifier}\nChat history: {past_questions_text}\nLatest question: {new_question}" # Get the standalone question using the LLM standalone_question = CHAT_LLM.invoke(system_prompt) else: standalone_question = new_question return standalone_question modified_question = modify_question_with_memory(new_question="your new question here", past_questions=past_questions) ``` The code above creates a stand-alone question using the previous questions, the new question, and the new_question_modifier which is passed into an LLM (OpenAI) - But what do I really mean by a standalone question? A stand-alone question is a question that can be understood by the LLM without prior knowledge of the past conversation. 
Let me explain with an example. Let’s assume your first question is `“Who is the president of America?”`, the LLM answers `“Joe Biden”`, and then you ask `“How old is he?”` The question `“How old is he?”` is not a standalone question, because no one can answer it without knowing whom you are talking about. So here is what the function above does: it looks at your new question `“How old is he?”` and compares it with the former question `“Who is the president of America?”` The LLM then asks itself, “Is the recent question related to the past questions?” If the answer is yes, it modifies the new question into something like `“How old is the current president of America?”` or `“How old is Joe Biden?”` and returns it so that it can be embedded and sent to the vector database for similarity search. But if the answer is no, it passes your question through just as it is. This modified question is called a `stand-alone question` because anyone can understand it even without knowing the previous conversation. I hope this is clear 😁✌️ Finally, after the function has given you the standalone question, you can send it to your embedding model and from there to your vector store for similarity search. Note: All these steps must be in a single pipeline so that the output of one becomes the input of the next until the user gets their answer (a minimal sketch of such a pipeline is included at the end of this article). I believe you understand what I’m saying 🤗 Also, don’t forget to try out different system prompts and see what works best for your use case. The system prompt I used here is just an example for you to build on. ## In Conclusion I developed this approach after thorough brainstorming, and while it works effectively for the most part, I’d genuinely appreciate any feedback you have. I'd also be grateful if you could share any alternative approaches you've tried that might improve upon it. See you in the comment section, and thank you so much for reading. HAPPY RAGING🤗🚀 You can always reach me on [X: 3rdSon__](https://x.com/3rdSon__) [LinkedIn: Victory Nnaji](https://www.linkedin.com/in/3rdson/) [GitHub: 3rd-Son](https://github.com/3rd-Son)--DIVIDER--
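**Appendix: a minimal pipeline sketch.** To make the flow described above concrete, here is one way the pieces could be wired together. It assumes the `collection`, `get_last_three_questions`, and `modify_question_with_memory` objects defined earlier, plus hypothetical `embed_and_search` and `generate_answer` helpers standing in for your vector store lookup and final LLM call. Treat it as a sketch to adapt, not a drop-in implementation.

```python
from datetime import datetime

def answer_with_memory(question: str, email: str, chat_id: str) -> str:
    """Sketch of the full memory pipeline: retrieve -> rewrite -> search -> answer -> store."""
    # 1. Pull the user's recent questions from MongoDB
    past_questions = get_last_three_questions(email, chat_id)

    # 2. Rewrite the new question into a standalone question if it depends on history
    standalone_question = modify_question_with_memory(question, past_questions)

    # 3. Embed the standalone question and run similarity search on your vector store
    #    (embed_and_search is a placeholder for your own retrieval step)
    context_chunks = embed_and_search(standalone_question)

    # 4. Generate the final answer from the retrieved context
    #    (generate_answer is a placeholder for your usual RAG answering step)
    answer = generate_answer(standalone_question, context_chunks)

    # 5. Store this turn so it becomes "memory" for the next question
    collection.insert_one({
        "question": question,
        "email": email,
        "response": answer,
        "chatId": chat_id,
        "isActive": True,
        "isDeleted": False,
        "createdAt": datetime.utcnow(),
        "updatedAt": datetime.utcnow(),
    })
    return answer
```

Each step's output feeds the next, and the insert at the end is what turns the current conversation turn into memory for the following question.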
0hkuicWh2tKk
regmi.prakriti24
Hands on Computer Vision: Build Production-Grade Models in an Hour
:::youtube[Title]{#8em2GBD0H8g} --DIVIDER-- --- --DIVIDER--# Learning Objectives > *In this notebook, we will explore the practical implementations of some primal CV tasks like image classification, image segmentation, and object detection using modern computer vision techniques leveraging some popular pre-trained models.* By the end of this session, you will be able to: 1) Understand the applications of image classification, segmentation, and object detection. <br> 2) Use pre-trained models to perform these tasks with minimal setup. <br> 3) And, visualize the outputs of pre-trained models for test analysis. <br> --DIVIDER--# Prerequisites To ensure participants can fully engage and benefit from this workshop, the following are recommended: 1. **Basic Understanding of Python:** Familiarity with Python programming, including syntax, data structures, and basic libraries like numpy and matplotlib. 2. **Google Account:** You'll need a Google account to access and run the Colab notebook we'll be using during the webinar. 3. **Basic Understanding of Deep Learning:** No advanced expertise needed, but a basic grasp of how CNNs process images would be helpful. All required libraries and dependencies are pre-installed in the Colab environment. --DIVIDER--:::info{title="Webinar Resources"} 📝 To follow along with this webinar: 1. Use our [Google Colab Notebook](https://colab.research.google.com/drive/1oGzv7q9PqnlNMj0i0pu2ZtvEi-GkoG4N) - Sign in with your Google account - Click "Copy to Drive" to create your own editable version - All required libraries are pre-installed in Colab 2. For later reference, check our [GitHub Repository](https://github.com/readytensor/rt-cv-2024-webinar) which is also linked in the **Models** section of this webinar publication. - Contains complete code base - Additional code examples and resources - Extended documentation The presentation slides used in this webinar are also available in the **Resources** section as **"Ready Tensor Computer Vision Webinar.pdf"**. We recommend using the Colab notebook during the code review section for the smoothest experience! ::: Now, let's dive into computer vision!--DIVIDER-- --- --DIVIDER--# Image Classification Image classification is the task of identifying what's in an image by assigning it a label from a set of predefined categories. For example, determining if a photo contains a dog, cat, car, or person. When implementing image classification, you have several approaches: 1. **Build your own models from scratch** - giving you full control but requiring extensive training data and computational resources 2. **Use pre-trained models** - leveraging models already trained on large datasets like ImageNet 3. **Fine-tune pre-trained models on your specific dataset** - combining the best of both worlds For most real-world applications, using pre-trained models (approach #2) is the smart choice. These models have already learned to recognize a wide variety of visual features, allowing you to: - Get started quickly without extensive training data - Save significant time and computing resources - Often achieve better results than training from scratch In this tutorial, we'll use a pre-trained model to classify images. If you're interested in training a model on your own dataset, check out the resources section for a detailed guide on transfer learning and fine-tuning. Let's get started! 
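(If you later decide to go with approach #3 and fine-tune on your own dataset, the setup in Keras might look roughly like the sketch below, assuming a hypothetical `data/train` directory of labeled images. It is only an illustration; the rest of this tutorial uses ready-made pre-trained models.)

```python
import tensorflow as tf
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models

# Hypothetical directory of labeled images, e.g. data/train/<class_name>/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(299, 299), batch_size=32
)
num_classes = len(train_ds.class_names)

# Load InceptionV3 without its classification head and freeze the base
base = InceptionV3(weights="imagenet", include_top=False, pooling="avg")
base.trainable = False

model = models.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1] as InceptionV3 expects
    base,
    layers.Dense(num_classes, activation="softmax"),  # new head for your classes
])

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=3)  # brief training of the new head only
```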
👇--DIVIDER--**Importing the Libraries** ```python import tensorflow as tf import matplotlib.pyplot as plt import glob as glob import os import cv2 import random import json import numpy as np from PIL import Image ``` <br> --DIVIDER--**Accessing the Data** ```python base_dataset_path = os.path.join("WebinarContent", "Datasets") classification_data_samples = "ImageClassification" images = os.path.join(base_dataset_path, classification_data_samples) image_paths = [os.path.join(base_dataset_path, classification_data_samples, x) for x in os.listdir(images) ] # Sort files for consistent ordering image_paths.sort() ```--DIVIDER--**Visualizing Test Sample** We will use the French Bulldog image for prediction. Let's load and display it. ```python plt.figure(figsize=(8,4)) image = plt.imread(image_paths[2]) plt.imshow(image) plt.axis("off") ```--DIVIDER--![FrenchBullDog.jpg](FrenchBullDog.jpg)--DIVIDER--<br> ## Inception V3 for Image Classification InceptionV3, introduced by **Google** in 2015, is a successor to InceptionV1 and V2. It is a convolutional neural network designed for high accuracy in image classification while being computationally efficient. The model uses convolutional, pooling, and inception modules, with inception blocks enabling the network to learn features at multiple scales using filters of varying sizes. Before we move ahead let's take a look at the images the model has been trained on. --DIVIDER--**Accessing Inception V3 Model Labels**--DIVIDER--```python class_index_file = "WebinarContent/ModelConfigs/imagenet_class_index_file.json" with open(class_index_file, 'r') as f: class_mapping = json.load(f) ``` --DIVIDER--```python class_names = [class_mapping[str(i)][1] for i in range(len(class_mapping))] print(f"Total Classes: {len(class_names)}") ``` ```bash > Total Classes: 1000 ```--DIVIDER--**Visualization of The Classes**--DIVIDER--```python random.shuffle(class_names) num_rows, num_cols = 2, 3 fig, ax = plt.subplots(num_rows, num_cols, figsize=(7, 2.5)) fig.suptitle("Sample ImageNet Classes", fontsize=12) for i, ax in enumerate(ax.flat): if i < len(class_names[:6]): ax.text(0.5, 0.5, class_names[i], ha='center', va='center', fontsize=10) ax.set_xticks([]) ax.set_yticks([]) else: ax.axis('off') ``` --DIVIDER-- ![ImageNet Classes.png](ImageNet%20Classes.png)--DIVIDER--InceptionV3 was trained on the **ImageNet** dataset, a large-scale dataset commonly used for image classification tasks, with categories ranging from animals and plants to everyday objects and scenes, which consists of over **1.2 million labeled images** across **1,000 categories**. --DIVIDER--### Loading the Inception V3 Model--DIVIDER--```python from tensorflow.keras.applications import InceptionV3 from tensorflow.keras.applications.inception_v3 import preprocess_input ```--DIVIDER--```python inception_v3_model = InceptionV3(weights='imagenet') ```--DIVIDER--**Model Input Size Check** Knowing the image shape is crucial for preprocessing, model compatibility, resource management (memory), and ensuring the model performs optimally with the given data. 1. **Model Compatibility**: Most models, including InceptionV3, expect input images of a specific shape (e.g., 299x299x3 for InceptionV3). **If the images fed into the model don't match this expected shape, the model will throw an error.** Therefore, knowing the image shape ensures that the images are preprocessed correctly to fit the model’s requirements. 2. **Data Preprocessing:** Knowing the expected input shape helps in resizing images properly. 
If an image is too large or too small, resizing it to the required dimensions is necessary for consistent model performance. 3. **Memory and Computational Efficiency:**: The shape of the image affects the amount of memory required to store the data. Larger images (higher resolution) require more memory. For instance, images of shape (299, 299, 3) will take up less memory than images of shape (512, 512, 3) --DIVIDER-- ```python print(inception_v3_model.input_shape) ``` ``` > (None, 299, 299, 3) ```--DIVIDER--Here, the input shape **(None, 299, 299, 3)** means the InceptionV3 model expects input images of size 299x299 pixels with 3 color channels (RGB). This shape is consistent with the pre-trained InceptionV3 model, which is designed to work with color images resized to 299x299 pixels.--DIVIDER--### Image Preprocessing--DIVIDER--```python image_paths[0] > Datasets\ImageClassification\FrenchBullDog.jpg ``` --DIVIDER----DIVIDER--```python tf_image = tf.io.read_file(image_paths[0]) #reading image decoded_image = tf.image.decode_image(tf_image) # decode the image into a tensor image_resized = tf.image.resize(decoded_image, inception_v3_model.input_shape[1:3]) # resizing the image to match the expected input shape of the model image_batch = tf.expand_dims(image_resized, axis = 0) # add an extra dimension to the image image_batch = preprocess_input(image_batch) #preprocess the image to match the input format ```--DIVIDER--### Prediction With Inception v3--DIVIDER--```python model_prediction = inception_v3_model(image_batch) decoded_model_prediction = tf.keras.applications.imagenet_utils.decode_predictions( preds = model_prediction, top = 1 ) print("Predicted Result: {} with confidence {:5.2f}%".format( decoded_model_prediction[0][0][1],decoded_model_prediction[0][0][2]*100)) plt.imshow(Image.open(image_paths[0])) plt.axis('off') plt.show() ``` ![out3.png](out3.png)--DIVIDER--> Oops ! Here the sunglasses overruled :(. But we can always use a model more specialized for our use case! --DIVIDER-- --- --DIVIDER--## Using a Specialized Pre-trained Model When a pre-trained model doesn't deliver satisfactory results for your specific use case, one option is to fine-tune the model or explore other sources that provide fine-tuned models. Fine-tuning allows you to adapt a model trained on large datasets to perform better on your specific data by updating only the last few layers of the model. Here are some sources you can utilize: 1. TensorFlow Hub 2. Hugging Face Model Hub 3. Keras Applications 4. Facebook AI Research 5. 
ReadyTensor's Model Hub Let's try using one of the fine-tuned models for our purpose.--DIVIDER--```python from transformers import AutoImageProcessor, AutoModelForImageClassification ```--DIVIDER--```python image_processor = AutoImageProcessor.from_pretrained("jhoppanne/Dogs-Breed-Image-Classification-V2") model = AutoModelForImageClassification.from_pretrained("jhoppanne/Dogs-Breed-Image-Classification-V2") ```--DIVIDER--This model is a fine-tuned version of `microsoft/resnet-152` on the **Stanford Dogs** dataset, achieving: ``` Loss: 1.0115 Accuracy: 84.08 % ``` Source : [Model Source](https://huggingface.co/jhoppanne/Dogs-Breed-Image-Classification-V2) --DIVIDER--```python image = Image.open("WebinarContent/Datasets/ImageClassification/FrenchBullDog.jpg") inputs = image_processor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits predicted_class_idx = logits.argmax(-1).item() print(f"Predicted class: {model.config.id2label[predicted_class_idx]}.") plt.imshow(image) ``` ![out4.png](out4.png)--DIVIDER-- > There you go! Just what we needed :) --DIVIDER-- --- --DIVIDER--# Object Detection Object Detection is a computer vision task that involves identifying and localizing objects within an image or video. It not only classifies objects but also uses bounding boxes to pinpoint their positions. **Key Components:** <br> 1. Localization: Identifies the object’s position with a bounding box. 2. Classification: Labels the object (e.g., "dog," "car"). 3. Confidence Score: The probability that the prediction is correct. **Techniques:** **Two-Stage Models (e.g., Faster R-CNN):** Generate region proposals first and classify them second, offering high accuracy but slower speeds. <br> **One-Stage Models (e.g., YOLO, SSD):** Predict everything in one pass, fast and suitable for real-time applications but may sacrifice some accuracy.--DIVIDER--> Lets try using the YOLO (a pretrained) model from **UltraLytics** for our object detection task --DIVIDER--:::info{title="Info"} **Why Use YOLO from Ultralytics?** Ultralytics **YOLO** models are optimized for fast and accurate inferencing, ideal for real-time tasks like object detection and segmentation. Pre-trained models can be deployed on edge devices and support formats like ONNX and TensorFlow Lite for versatile usage. So, we look forward to leveraging it. :::--DIVIDER--### YOLOv11 for Object Detection--DIVIDER--**Loading Libraries** ```python #!pip install ultralytics ```--DIVIDER--**Loading the YOLO Module** ```python from ultralytics import YOLO ```--DIVIDER--### Loading The Model ```python yolo11_model = YOLO(os.path.join("WebinarContent", "Models", "yolov11m.pt")) ```--DIVIDER--**Visualization of The Classes**--DIVIDER--```python yolo11_classes = yolo_classes = list(yolo11_model.names.values()) random.seed(0) random.shuffle(yolo11_classes) num_rows, num_cols = 3, 6 fig, ax = plt.subplots(num_rows, num_cols, figsize=(8, 2)) fig.suptitle("Sample Yolo11 Classes", fontsize=12) for i, ax in enumerate(ax.flat): if i < len(yolo11_classes[:18]): ax.text(0.5, 0.5, yolo11_classes[i], ha='center', va='center', fontsize=8) ax.set_xticks([]) ax.set_yticks([]) else: ax.axis('off') ```--DIVIDER-- ![sample_yolo_classes.png](sample_yolo_classes.png)--DIVIDER--**Visualizing Test Sample** We will use a road traffic image for object detection. 
Let's visualize it first.--DIVIDER--```python test_image_path = "/content/WebinarContent/Datasets/ObjectDetection/roadTraffic.png" ```--DIVIDER--```python test_image = Image.open(test_image_path) fig, ax = plt.subplots() ax.imshow(test_image) ``` ![out6.png](out6.png)--DIVIDER--### Making Predictions With YOLOv11--DIVIDER--```python results = yolo11_model.predict(test_image_path) ``` image 1/1 WebinarContent\Datasets\ObjectDetection\roadTraffic.png: 352x640 3 persons, 8 cars, 339.6ms Speed: 4.0ms preprocess, 339.6ms inference, 2.0ms postprocess per image at shape (1, 3, 352, 640) --DIVIDER--```python print(f"The number of objects detected in the image is: {len(results[0].boxes)}") ``` The number of objects detected in the image is: 11--DIVIDER--### Visualizing Object Detection Results --DIVIDER--```python prediction_coordinates = [] predictions = [] for box in results[0].boxes: class_id = results[0].names[box.cls[0].item()] predictions.append(class_id) cords = box.xyxy[0].tolist() cords = [round(x) for x in cords] prediction_coordinates.append(cords) conf = round(box.conf[0].item(), 2) ``` --DIVIDER--```python fig, ax = plt.subplots() ax.imshow(test_image) for i, bbox in enumerate(prediction_coordinates): rect = plt.Rectangle((bbox[0], bbox[1]), bbox[2] - bbox[0], bbox[3] - bbox[1], linewidth=2, edgecolor='r', facecolor='none') ax.add_patch(rect) ax.text(bbox[0], bbox[1] - 10, f'{predictions[i]}', color='b', fontsize=6, backgroundcolor='none') plt.show() ``` ![out7.png](out7.png)--DIVIDER--This way, using a pre-trained **YOLOv11** model for **car detection** in traffic management, we can gain several benefits: 1. **Automatic Traffic Analysis**: It can count the number of vehicles, detect traffic jams, and measure the speed of cars, enabling smart traffic lights and dynamic traffic management. 2. **Parking Management**: YOLOv11 can help in detecting available parking spots by identifying parked cars in parking lots, improving the user experience in urban areas. And more. These applications can significantly enhance traffic management, improve road safety, and optimize urban planning.--DIVIDER-- > **Let's take it up a level next!** --DIVIDER-- --- --DIVIDER--# Image Segmentation This is an advanced use case where the model is applied to segment objects in an image, rather than just detecting them. Unlike traditional object detection, segmentation involves classifying each pixel in an image, allowing precise boundaries for objects like cars, people, or buildings. The training of **Object Detection** and **Image Segmentation** models differs mainly in the output and data requirements. Object detection models, like YOLO, produce **bounding boxes** around objects and assign class labels, requiring annotations that specify object locations. Segmentation models, like YOLOv8-Seg, generate **pixel-wise masks**, assigning a class to each pixel in the image, requiring more detailed pixel-level annotations. 
While object detection typically uses simpler loss functions (e.g., bounding box and classification loss) and is less computationally expensive, image segmentation is more resource-intensive, requiring more complex models and loss functions (e.g., Dice loss) to provide precise object boundaries.--DIVIDER--### Loading the YOLOv11-seg Model--DIVIDER--```python segmentation_model = YOLO(os.path.join("WebinarContent", "Models", "yolov11m-seg.pt")) ```--DIVIDER--**Loading The Test Image**--DIVIDER--```python segmentation_test_image_path = os.path.join("WebinarContent", "Datasets", "ImageSegmentation", "beatles.png") img = cv2.cvtColor(cv2.imread(segmentation_test_image_path,cv2.IMREAD_COLOR), cv2.COLOR_BGR2RGB) plt.imshow(img) ``` ![out8.png](out8.png)--DIVIDER--**Accessing the Model Labels**--DIVIDER--```python yolo_seg_classes = list(segmentation_model.names.values()) classes_ids = [yolo_classes.index(clas) for clas in yolo_seg_classes] ```--DIVIDER--### Inferencing With YOLOv11-seg--DIVIDER--```python conf = 0.5 # setting threshold results = segmentation_model.predict(img, conf=conf) ```--DIVIDER--### Visualizing Segmentation Results--DIVIDER--```python colors = [random.choices(range(256), k=3) for _ in classes_ids] person_class_id = 0 for result in results: for mask, box in zip(result.masks.xy, result.boxes): points = np.int32([mask]) class_id = int(box.cls[0]) if (class_id == person_class_id ): cv2.polylines(img, points, True, (255, 0, 0), 1) color_number = classes_ids.index(int(box.cls[0])) cv2.fillPoly(img, points, colors[color_number]) plt.imshow(img) ``` ![out9.png](out9.png)--DIVIDER--Voila ! You did it!--DIVIDER--# Conclusion As we have demonstrated in this hands-on session, building production-grade computer vision systems is now achievable within an hour thanks to pre-trained models like InceptionV3 and YOLO. By leveraging these powerful models, we can quickly implement complex tasks from image classification to segmentation, making advanced computer vision capabilities readily accessible for real-world applications. --DIVIDER--# Exercises Here are some exercises to help you practice and extend what you've learned. They are arranged in increasing order of difficulty: ## 1. Model Comparison (Beginner) Try using ResNet50 instead of InceptionV3 for image classification: - Load the pre-trained ResNet50 model - Run inference on the same images - Compare the predictions and confidence scores - Which model performs better for our dog breed images? ## 2. YOLO Performance Analysis (Intermediate) Experiment with different YOLO model sizes: - Try all 5 variants - Measure and compare inference times and GPU memory usage - Analyze the trade-off between speed and accuracy - Which size would you choose for a real-time application? ## 3. Object Tracking (Intermediate) Implement object tracking in a video: - Use YOLO's tracking feature with ByteTrack - Display unique IDs for each detected object - Track objects across frames - Bonus: Add motion trails that fade over time (last 1-2 seconds of movement) ## 4. Video Segmentation with Tracking (Advanced) Combine segmentation and tracking in a video pipeline: - Load and process video files frame by frame - Apply segmentation to each frame - Track segmented objects across frames - Create an output video showing both masks and tracking IDs Tips and starter code for each exercise are available in the GitHub repository. 
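As a quick illustration of where Exercise 1 could start, the sketch below mirrors the InceptionV3 steps from earlier but swaps in ResNet50, which expects 224x224 inputs. It reuses the French Bulldog image from earlier; adjust the path to match your own setup.

```python
import tensorflow as tf
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input
from tensorflow.keras.applications.imagenet_utils import decode_predictions

# Load the pre-trained ResNet50 model (ImageNet weights)
resnet50_model = ResNet50(weights="imagenet")

# Same test image used in the InceptionV3 example
image_path = "WebinarContent/Datasets/ImageClassification/FrenchBullDog.jpg"

# ResNet50 expects 224x224 RGB inputs
tf_image = tf.io.read_file(image_path)
decoded_image = tf.image.decode_image(tf_image, channels=3)
image_resized = tf.image.resize(decoded_image, resnet50_model.input_shape[1:3])
image_batch = preprocess_input(tf.expand_dims(image_resized, axis=0))

# Top-3 predictions so the two models can be compared side by side
preds = resnet50_model(image_batch)
for _, label, score in decode_predictions(preds.numpy(), top=3)[0]:
    print(f"{label}: {score:.2%}")
```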
Feel free to share your project work in a publication on Ready Tensor!--DIVIDER-- <br> ### Additional Reading Materials - Detailed overview on [AlexNet](https://papers.nips.cc/paper_files/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf), [Inception](https://arxiv.org/pdf/1409.4842), [YOLO](https://arxiv.org/pdf/1506.02640) - References to some popular data hubs [IEEE DataPort](https://ieee-dataport.org/datasets), [Hugging Face Dataset Hub](https://huggingface.co/datasets), [Ready Tensor Dataset Hub](https://app.readytensor.ai/datasets) - Guidelines to getting started with Frameworks : [Tensorflow](https://www.tensorflow.org/api_docs), [Pytorch](https://pytorch.org/docs/stable/index.html), [Ultralytics](https://www.ultralytics.com/)
0llldKKtn8Xb
ready-tensor
cc-by
The Open Source Repository Guide: Best Practices for Sharing Your AI/ML and Data Science Projects
![repo-hero-cropped.jpg](repo-hero-cropped.jpg) <p align="center"><em>Image credit: https://www.pexels.com</em></p> --DIVIDER-- # Abstract This article presents a comprehensive framework for creating and structuring AI/ML project repositories that maximize accessibility, reproducibility, and community benefit. We introduce a three-tiered evaluation system - Essential, Professional, and Elite - to help practitioners assess and improve their code repositories at appropriate levels of rigor. The framework encompasses five critical categories: Documentation, Repository Structure, Environment and Dependencies, License and Legal considerations, and Code Quality. Drawing from industry standards and best practices, we provide concrete criteria, common pitfalls, and practical examples that enable AI practitioners, researchers, and students to create repositories that serve as valuable resources for both their creators and the wider community. By implementing these practices, contributors can enhance their professional portfolios while simultaneously advancing open science principles in the AI landscape. --DIVIDER--# Introduction AI and machine learning have advanced dramatically through open collaboration. The field thrives on shared knowledge, with researchers and practitioners expected to contribute their work openly. For many, public repositories serve dual purposes: showcasing personal expertise and advancing collective understanding. Yet most shared repositories fall far short of the professional standards that would make them truly valuable to the community. Take a moment to examine two different AI project repositories implementing the same ResNet18 image classification model: **Repository A**: https://github.com/readytensor/rt_img_class_jn_resnet18_exampleA **Repository B**: https://github.com/readytensor/rt_img_class_jn_resnet18_exampleB **What did you notice?** The README for Repository A contains only a brief description of the project. With no additional information about prerequisites, installation, implementation, or usage, visitors cannot determine how to use it or whether it's trustworthy. Most visitors spend less than 30 seconds on Repository A before moving on. Repository B provides clear organization and proper documentation. Visitors immediately understand what the project does and have enough information to use it effectively. Though both repositories contain the same technical work, one presents it in a way that builds trust and facilitates adoption. **Which repository would you want your name attached to?** The reality is that many AI/ML projects resemble Repository A. This is a missed opportunity to showcase the work effectively and benefit the community. A poorly crafted repository creates a negative impression that can impact career opportunities, collaboration potential, and project adoption. This article presents a comprehensive framework to help you create repositories that are not just functional but truly valuable, answering four crucial questions for visitors: 1. **What is this about?** (Clear communication of purpose and capabilities) 2. **Why should I care?** (Value proposition and applications) 3. **Can I trust it?** (Demonstrated professionalism and quality) 4. **Can I use it?** (Clear instructions and appropriate licensing) We organize best practices into five categories with three tiers of implementation (Essential, Professional, and Elite), allowing you to match your effort to project needs and resource constraints. 
Whether you are a student showcasing class projects, a researcher publishing code alongside a paper, or a professional building tools for broader use, these guidelines will help you create repositories that enhance your professional portfolio and contribute meaningfully to the field.--DIVIDER--:::info{title="Info"} **Important Note** 1. While this framework primarily targets AI/ML and data science projects, most concepts apply to software development repositories in general. The principles of good documentation, organization, and reproducibility benefit all code projects regardless of domain. 2. Many criteria in the current framework are specifically designed for Python-based implementations, reflecting its prevalence in AI/ML work. Future iterations will expand to address the unique requirements of other languages such as R, JavaScript, and others. 3. This article focuses on repository structure and sharing practices, not on AI/ML methodology itself. Even the most technically sound AI/ML project may fail to gain community adoption if it cannot be easily understood, trusted, and used by others. We aim to help you effectively share your work, not instruct you on how to conduct that work in the first place. :::--DIVIDER-- # Why Well-Organized Repositories Matter For AI/ML engineers and data scientists, the quality of your code repositories directly impacts your work efficiency, career progression, and contribution to the community in three fundamental ways: ![repo-best-practices-benefits.jpg](repo-best-practices-benefits.jpg) **Time Savings Through Enhanced Usability** Well-structured repositories dramatically improve your own productivity by making your work reusable and maintainable. When you properly document and organize code, you avoid spending hours rediscovering how your own implementations work months later. Data scientists frequently report spending more time understanding and fixing old code than writing new solutions. Clean dependency management prevents environment reconstruction headaches, allowing you to immediately resume work on interesting problems rather than debugging configuration issues. This organization also makes your code extensible—when you want to build on previous work, add features, or adapt models to new datasets, the foundation is solid and understandable. **Career Advancement Through Professional Demonstration** Your repositories serve as concrete evidence of your professional capabilities. Hiring managers and potential collaborators regularly evaluate GitHub profiles when assessing candidates, often placing repository quality on par with technical skills. A well-organized repository demonstrates not just coding ability but also production readiness, attention to detail, and consideration for users - all qualities highly valued in professional settings. Many data scientists find that quality repositories lead to unexpected opportunities: conference invitations, collaboration requests, and interview offers frequently come from people who discovered their well-structured work. In a field where practical implementation matters as much as theoretical knowledge, your repositories form a crucial part of your professional identity. **Community Impact Through Accessible Knowledge** The collective advancement of AI/ML depends on shared implementations and reproducible research. When you create quality repositories, you help others avoid reinventing solutions to common problems, allowing the field to progress more rapidly. 
Consider the frustration you have experienced trying to implement papers with missing details or the hours spent making someone else's code work. Your well-organized repository prevents others from facing these same challenges. Repositories that clearly answer what the project does, why it matters, whether it can be trusted, and how to use it become valuable community resources rather than one-time demonstrations. Every properly structured repository contributes to building a more collaborative, efficient AI ecosystem. Investing time in repository quality is not about perfectionism — it is about practical benefits that directly affect your daily work, career trajectory, and impact on the field. The framework presented in this article provides a structured approach to realizing these benefits in your own projects. --DIVIDER--# Best Practices Framework The AI repository best practices framework provides a structured approach to organizing and documenting code repositories for AI and machine learning projects. It establishes clear standards across five critical categories, with tiered implementation levels to accommodate different project stages and requirements. ## Framework Structure The framework organizes best practices into five main categories: 1. **Documentation**: The written explanations and guides that help users understand and use your project 2. **Repository Structure**: The organization of directories and files within your repository 3. **Environment and Dependencies**: The specification of software requirements and configuration needed to run your code 4. **License and Legal**: The permissions and terms governing the use of your code and associated assets 5. **Code Quality**: The technical standards and practices applied to your codebase Each category contains specific criteria that can be assessed to determine if a repository meets established standards. Rather than presenting these as an all-or-nothing requirement, the framework defines three progressive tiers of implementation: ## Implementation Tiers The best practices framework is structured into three tiers of implementation - Essential, Professional, and Elite. You can select the tier that aligns with your project goals, audience expectations, and available resources. 
![implementation-tiers.jpg](implementation-tiers.jpg) | **Tier** | **Definition** | **Key Characteristics** | **Appropriate For** | |------|-------------|---------------------|-----------------| | **Essential** | Minimum standards for usefulness | • Basic understandability for first-time visitors<br>• Sufficient information for technical users<br>• Basic organizational structure | • Personal projects<br>• Course assignments<br>• Early-stage research code<br>• Proof-of-concept implementations | | **Professional** | Comprehensive documentation and organization | • Detailed guidance for various users<br>• Consistent structure and organization<br>• Complete environment specifications<br>• Established coding standards<br>• Testing frameworks and documentation | • Team projects<br>• Open-source projects with contributors<br>• Published research code<br>• Professional portfolio work<br>• Small production-quality projects | | **Elite** | Best-in-class practices | • Comprehensive project documentation<br>• Meticulous logical structures<br>• Robust dependency management<br>• Complete legal compliance<br>• Advanced quality assurance | • Major open-source projects<br>• Production-level repositories<br>• Research code for broad adoption<br>• Reference implementations | The tiered structure allows for incremental implementation, with each level building on the previous one. This progressive approach makes the framework accessible to projects of different scales and maturity levels. The framework is not prescriptive about specific technologies or tools, focusing instead on the underlying principles of good repository design. This flexibility allows it to be applied across different programming languages, AI/ML frameworks, and project types. Each criterion in the framework is designed to be objectively assessable, making it possible to evaluate repositories systematically. This assessment can be conducted manually or through automated tools that check for the presence of specific files, structural patterns, or documentation elements. In the following sections, we will explore each category in detail, examining specific criteria, providing examples, and offering implementation guidance for each tier. --DIVIDER--## Documentation Documentation is the foundation of a user-friendly repository, serving as the primary interface between your code and its potential users. Well-crafted documentation answers fundamental questions about your project: what it does, why it matters, how to use it, and what to expect from it. Unfortunately, documentation is often treated as an afterthought, creating immediate barriers to adoption. The following chart lists the common pitfalls in documentation. ![documentation-pitfalls.jpg](documentation-pitfalls.jpg) Many repositories suffer from missing or minimal README files, leaving users with no understanding of project purpose or functionality. Others lack clear installation instructions, causing users to encounter confusing errors during setup. Without usage examples, users cannot verify if the implementation meets their needs. Undocumented prerequisites and methodologies further compound these issues, leaving critical information hidden until users encounter mysterious failures. The documentation component of our framework addresses these challenges through a structured approach that scales with project complexity. 
The following chart lists the criteria for Essential, Professional, and Elite documentation tiers, guiding you to create effective documentation that meets user needs at every level. --DIVIDER-- ![Documentation.svg](Documentation.svg) Detailed definitions of each of the criteria are provided in the document titled `Ready Tensor Repository Assessment Framework v1.pdf` available in the **Resources** section of this publication. --DIVIDER--Let's explore the key principles of documentation at each tier. **Essential Documentation** provides the minimum information needed for basic understanding and use. It answers "What is this project?", "Can I use it?", and "How do I use it?" — enabling quick evaluation and adoption with minimal friction. **Professional Documentation** supports serious adoption by providing comprehensive setup instructions, detailed usage guides, and technical specifications. It addresses users who plan to incorporate your work into their projects, answering "How does this work under different conditions?" and "What configuration options exist?" Professional documentation also demonstrates trustworthiness for production environments by incorporating testing procedures, error handling approaches, and other reliability features that signal production readiness. **Elite Documentation** fosters a sustainable ecosystem around your project through contribution guidelines, change tracking, and contact information. It creates pathways for collaboration, answering "How can I contribute?" and "How is this project evolving?" Effective documentation transforms your repository from personal code storage into a valuable community resource, significantly increasing your project's accessibility, adoption, and impact regardless of its scale. --DIVIDER--## Repository Structure A well-organized repository structure provides a solid foundation for your AI/ML project, making it easier for users to navigate, understand, and contribute to your code. Proper structure serves as a visual map of your project's architecture and components, guiding users through your implementation. Poorly organized AI/ML repositories create significant barriers to understanding and use. The following chart illustrates common pitfalls in repository structure. ![repo-structure-pitfalls.jpg](repo-structure-pitfalls.jpg) AI/ML project repositories often exhibit a chaotic root directory filled with dozens of unrelated files, making it difficult to identify entry points or understand the project's organization. Code, configuration, and data files might be randomly mixed together without logical separation. Inconsistent or confusing naming conventions create additional cognitive load for new users trying to understand the codebase. Many repositories also lack clear boundaries between different components, such as model definition, data processing, and evaluation code. To address repository organization challenges, our framework offers systematic guidelines that adapt to project size. The chart below presents Essential, Professional, and Elite structure criteria, designed to help you create intuitive and maintainable organization. ![Repository Structure Criteria.svg](Repository%20Structure%20Criteria.svg) Let's explore the key principles of repository structure at each tier. **Essential Structure** provides the minimum level of organization needed for basic navigation and understanding. 
It establishes a basic modular organization with logical separation of files, consistent and descriptive naming conventions for files and directories, a properly configured .gitignore file, and clearly identifiable entry points. This level focuses on answering "Where do I find what I need?" and "How do I start using this?" **Professional Structure** enhances navigability and maintainability through specific separation of components. It organizes code in dedicated module structures (such as src/ directories with submodules), places data in designated directories, separates configuration from code, and organizes notebooks, tests, documentation, and assets in their own logical locations. Professional repositories maintain appropriate directory density (under 15 files per directory) and reasonable directory depth (no more than 5 levels deep). They also properly isolate environment configuration files and dependency management structures. This level signals that the project is built for serious use and collaboration. **Elite Structure** builds on the Professional tier with the same organizational principles applied at a higher standard of consistency and completeness. The Elite structure maintains all the same criteria as Professional repositories but with greater attention to detail and thoroughness across all components. This comprehensive organization demonstrates adherence to industry best practices, making the project immediately familiar to experienced developers. A thoughtfully designed repository structure communicates professionalism and attention to detail, significantly reducing the barrier to entry for new users while improving maintainability for contributors. It transforms your repository from a personal collection of files into an accessible, professional software project that others can confidently build upon. --DIVIDER--## Environment and Dependencies Proper environment and dependency management is critical for ensuring that AI/ML projects can be reliably reproduced and used by others. This aspect of repository design directly impacts whether users can successfully run your code without frustrating setup issues or unexpected behavior. Many repositories fail to adequately address environment configuration, leading to the infamous "works on my machine" problem. The following chart highlights common pitfalls in environment and dependency management.--DIVIDER-- ![environment-and-dependency-pitfalls.jpg](environment-and-dependency-pitfalls.jpg)--DIVIDER--Dependency management problems appear when repositories fail to specify required libraries clearly, forcing users to guess which packages they need. When dependencies do appear, they often lack version numbers, creating compatibility problems as package APIs evolve. Missing documentation about Python version requirements or hardware dependencies leads to confusing errors when users attempt to run code in unsuitable environments. The environment and dependencies section of our framework provides solutions that grow with project sophistication. Below are the tiered criteria (Essential, Professional, and Elite) that guide reproducible environment setup.--DIVIDER-- ![Environment and Dependencies Criteria.svg](Environment%20and%20Dependencies%20Criteria.svg)--DIVIDER-- Let's explore the key principles of environment and dependency management at each tier. **Essential Environment Management** provides the minimum information needed for basic reproducibility. 
It clearly lists all project dependencies in standard formats such as requirements.txt, setup.py, or pyproject.toml. This level focuses on answering "What packages do I need to install?" allowing users to at least attempt to recreate the necessary environment. **Professional Environment Management** enhances reproducibility and ease of setup by pinning specific dependency versions to ensure consistent behavior across installations. It organizes dependencies into logical groups (core, dev, test) through separate requirement files or configuration options. Professional repositories specify required Python versions and include configuration for virtual environments such as environment.yml (conda), Pipfile (pipenv), or poetry.lock (poetry). This level provides confidence that the project can be reliably set up and run in different environments. **Elite Environment Management** optimizes for complete reproducibility and deployment readiness. It provides exact environment specifications through lockfiles, documents GPU-specific requirements including CUDA versions when applicable, and includes containerization through Dockerfiles or equivalent solutions. This comprehensive approach ensures that users can recreate the exact execution environment regardless of their underlying system, eliminating "it works on my machine" issues entirely. Proper environment and dependency management transforms your repository from a collection of code that runs only in specific conditions into a reliable, reproducible project that users can confidently deploy in their own environments. This attention to reproducibility demonstrates professional rigor and significantly increases the likelihood that others will successfully use and build upon your work. --DIVIDER--## License and Legal Proper licensing and legal documentation is a critical aspect of AI/ML repositories that is frequently overlooked. Without clear licensing, potential users cannot determine whether they can legally use, modify, or build upon your work, regardless of its technical quality. Many repositories either omit licenses entirely or include inappropriate licenses for their content. The following chart highlights common pitfalls in licensing and legal aspects.--DIVIDER-- ![license-legal-pitfalls.jpg](license-legal-pitfalls.jpg)--DIVIDER--Legal issues arise when repositories operate without licenses, creating ambiguity that prevents use by organizations with compliance concerns. Some repositories include licenses that conflict with their dependencies, while others neglect the unique legal aspects of AI/ML work regarding data and model rights. The absence of copyright notices and unclear terms for incorporated datasets or pretrained models further complicates legitimate use. For proper licensing and legal considerations, our framework provides clear benchmarks at varying complexity levels. The following chart presents Essential, Professional, and Elite tier criteria for legal compliance and clarity.--DIVIDER-- ![License and Legal Criteria.svg](License%20and%20Legal%20Criteria.svg)--DIVIDER-- Let's explore the key principles of licensing and legal documentation at each tier. **Essential Legal Documentation** ensures that users can determine basic usage rights. It includes a recognized license file (LICENSE, LICENSE.md, or LICENSE.txt) in the root directory that explicitly states terms of use, modification, and distribution. 
The chosen license must be appropriate for the project's purpose, dependencies, and intended use, avoiding unclear or conflicting terms. This level answers the fundamental question: "Am I legally permitted to use this?" **Professional Legal Documentation** enhances legal clarity by addressing AI/ML-specific concerns. In addition to proper licensing, it includes clear documentation of data usage rights, stating ownership, licensing, compliance requirements, and restrictions for any datasets used or referenced. Similarly, it documents model usage rights, specifying ownership, licensing terms, and redistribution policies for any ML models included or referenced. This level provides confidence that the project can be legally used in professional contexts. **Elite Legal Documentation** establishes a comprehensive legal framework supporting long-term community engagement. It builds on the Professional tier by adding explicit copyright statements in source files and documentation to prevent ambiguity in legal rights and attribution. Elite repositories also include a Code of Conduct that outlines contributor behavior expectations, enforcement mechanisms, and reporting guidelines to foster an inclusive and respectful environment. This level demonstrates commitment to professional standards and community values. Proper licensing and legal documentation transforms your repository from a potentially risky resource into a legally sound project that organizations and individuals can confidently incorporate into their work. This attention to legal concerns removes a significant barrier to adoption and signals professionalism to potential users and contributors.--DIVIDER--## Code Quality Code quality is the foundation of maintainable, reliable AI/ML projects. While functional code can deliver results, high-quality code enables long-term sustainability, collaboration, and trust in your implementation. In AI/ML repositories, functionality frequently takes precedence over quality, resulting in maintainability and reliability issues. The following chart highlights common code quality pitfalls.--DIVIDER-- ![code-quality-pitfalls.jpg](code-quality-pitfalls.jpg)--DIVIDER--Code quality issues manifest in sprawling, monolithic scripts that defy debugging efforts. Excessive function length and high cyclomatic complexity make maintenance difficult. The prevalence of hardcoded values, minimal error handling, and lack of tests results in brittle, unpredictable code. In the AI/ML context, missing random seed settings compromise reproducibility, while poorly documented notebooks obscure the development process.--DIVIDER--Our framework tackles code quality through graduated standards appropriate for different project stages. The chart below details the Essential, Professional, and Elite criteria that promote maintainable, reliable code as projects evolve--DIVIDER-- ![Code Quality Criteria.svg](Code%20Quality%20Criteria.svg)--DIVIDER--Let's explore the key principles of code quality at each tier. **Essential Code Quality** establishes basic maintainability by organizing code into functions and methods rather than monolithic scripts, keeping individual scripts under 500 lines, and implementing basic error handling through try/except blocks. It uses dedicated configuration files to separate parameters from code logic and sets random seeds to ensure reproducibility. For notebooks, it maintains reasonable cell length (under 100 lines) and includes markdown documentation (at least 10% of cells). 
This level provides the minimum quality needed for others to understand and use your code. **Professional Code Quality** significantly enhances maintainability and reliability by implementing comprehensive best practices. Functions are kept under 50 lines, code duplication is limited, and hardcoded constants are minimized. Professional repositories use environment variables for sensitive configurations, implement logging, include tests with framework support, and provide docstrings with parameter and return documentation. They also implement type hints, use style checkers for consistent formatting, control function complexity, and include data validation. For notebooks, they import custom modules and manage output cells properly. This level demonstrates serious software engineering practices. **Elite Code Quality** takes quality to production-grade standards by adding advanced practices such as comprehensive logging configuration, custom exception classes, and test coverage metrics. These repositories represent the highest standard of code quality, suitable for critical production environments and long-term maintenance. High-quality code communicates professionalism and reliability, significantly increasing confidence in your implementation. This attention to quality transforms your repository from working code into trustworthy software that others can confidently build upon, adapt, and maintain over time. --DIVIDER--# Implementation Guide with Examples The following steps outline a practical approach to creating high-quality AI/ML project repositories. For detailed examples of repository structures and README templates at each implementation tier, see **Appendix A: Sample Repository Structures** and **Appendix B: Sample README Structures**. ### Step 1: Select an Appropriate Template Choose a repository structure that matches your project complexity and goals: - **Essential**: For personal projects, educational demonstrations, or proof-of-concepts - **Professional**: For team projects, research code intended for publication, or open-source contributions - **Elite**: For production systems, major open-source projects, or reference implementations Refer to **Appendix A: Sample Repository Structures** for detailed examples at each tier. Customize these templates to fit your specific needs while maintaining the core organizational principles. Remember that even a small project can benefit from good structure. ### Step 2: Choose the Right License Select a license appropriate for your project's content and intended use: - **MIT License**: Permissive license good for most software projects, allowing commercial use - **Apache 2.0**: Similar to MIT but with patent protections - **GPL (v3)**: Strong copyleft license requiring derivative works to be open-sourced - **Creative Commons**: Various options for non-software content like datasets or documentation Consider the licenses of your dependencies, as they may constrain your options. Ensure your license is compatible with the libraries and frameworks you use. ### Step 3: Implement Environment and Dependency Management Choose the appropriate dependency management approach for your project: - **Essential**: `requirements.txt` listing direct dependencies - **Professional**: - Pinned version numbers (`numpy==1.21.0` instead of just `numpy`) - Separated requirements files for different purposes - Virtual environment configuration (conda, venv, etc.) 
- **Elite**: - Lockfiles for exact reproduction (poetry.lock, Pipfile.lock) - Containerization with Docker - Environment variables for configuration Document any non-Python dependencies or system requirements clearly in your README. ### Step 4: Create a Structured README Develop a README that matches your target implementation tier. A well-structured README is critical as it's often the first thing visitors see when discovering your project. When creating your README: - Focus on answering the four key questions: what the project is about, why users should care, whether they can trust it, and how they can use it - Match the detail level to your target tier (Essential, Professional, or Elite) - Include examples and code snippets where appropriate - Consider adding screenshots or diagrams for visual clarity Refer to **Appendix B: Sample README Structures** for detailed templates at each implementation tier, from basic structures covering essential information to comprehensive documents that support serious adoption and community engagement. ### Step 5: Follow Coding Best Practices Adopt established coding standards appropriate for your language: - **Python**: - Follow PEP 8 style guidelines (consistent indentation, naming conventions, etc.) - Use type hints for function signatures - Write docstrings for modules, classes, and functions - Consider using linters and formatters (black, flake8, pylint) - **Markdown**: - Use proper heading hierarchy - Include code blocks with language specification - Use lists, tables, and emphasis consistently - Add alt text to images for accessibility - **General Practices**: - Keep functions small and focused on a single task - Write descriptive variable and function names - Include comments explaining "why" not just "what" - Control script and function length - Set random seeds for reproducibility in AI/ML code For a comprehensive list of relevant tools and references, see the **Additional Resources** section at the end of this article, which includes links to code style guides, repository templates, documentation tools, and dependency management solutions.--DIVIDER--# Tools and Resources The following tools can significantly reduce the effort required to implement best practices in your repositories: ## Documentation Tools - [Sphinx](https://www.sphinx-doc.org/): Python documentation generator - [ReadTheDocs](https://readthedocs.org/): Documentation hosting platform - [Markdown Guide](https://app.readytensor.ai/publications/LX9cbIx7mQs9): Project documentation with Markdown - [Jupyter Book](https://jupyterbook.org/): Create publication-quality books from notebooks - [Docstring Conventions](https://app.readytensor.ai/publications/DM3Ao23CIocT): Guide for Python docstrings ## Repository Structure and Templates - [Cookiecutter](https://github.com/cookiecutter/cookiecutter): Project template tool - [Cookiecutter Data Science](https://github.com/drivendata/cookiecutter-data-science): Template for data science projects - [PyScaffold](https://github.com/pyscaffold/pyscaffold): Project generator for Python packages - [nbdev](https://nbdev.fast.ai/): Create Python packages from Jupyter notebooks ## Dependency Management - [uv](https://github.com/astral-sh/uv): Fast Python package installer and resolver - [Poetry](https://python-poetry.org/): Python packaging and dependency management - [Conda](https://docs.conda.io/): Package and environment management system - [pip-tools](https://github.com/jazzband/pip-tools): Set of tools for managing pip-compiled requirements - 
[Pipenv](https://pipenv.pypa.io/): Python development workflow tool - [Docker](https://www.docker.com/): Containerization platform ## Code Quality and Testing - [Pre-commit](https://pre-commit.com/): Git hook scripts manager - [Black](https://black.readthedocs.io/): Uncompromising Python code formatter - [Flake8](https://flake8.pycqa.org/): Python code linter - [Pylint](https://pylint.org/): Python static code analysis tool - [mypy](https://mypy.readthedocs.io/): Static type checker for Python - [pytest](https://docs.pytest.org/): Python testing framework - [Coverage.py](https://coverage.readthedocs.io/): Code coverage measurement for Python ## License Resources - [License Guide](https://app.readytensor.ai/publications/qWBpwY20fqSz): A primer on licenses for ML projects - [Choose a License](https://choosealicense.com/): Help picking an open source license - [Open Source Initiative](https://opensource.org/licenses): License information and standards - [TL;DR Legal](https://tldrlegal.com/): Software licenses explained in plain English - [Creative Commons](https://creativecommons.org/licenses/): Licenses for non-code assets ## Style Guides and Standards - [PEP 8](https://app.readytensor.ai/publications/pCgumBWFPD90): Style Guide for Python Code - [Google Python Style Guide](https://google.github.io/styleguide/pyguide.html): Comprehensive style guide - [Docstrings Guide](https://app.readytensor.ai/publications/DM3Ao23CIocT): Python Docstrings for Machine Learning code These tools address different aspects of repository quality, offering options for projects of all scales. Select tools that match your project needs and team capabilities rather than adopting everything at once. --DIVIDER--# Conclusion Well-structured repositories are essential for the success of AI/ML projects in the wider community. Our framework addresses five fundamental aspects of repository quality: 1. **Documentation** that communicates purpose, usage, and technical details 2. **Repository Structure** that organizes code logically 3. **Environment and Dependencies** that enable reproducibility 4. **License and Legal** considerations that establish usage rights 5. **Code Quality** standards that ensure maintainability The tiered approach, namely Essential, Professional, and Elite, allows you to match your effort to project needs and resource constraints. By evaluating your repositories against this framework, you can systematically improve their quality and impact. This will not only benefit your work efficiency and career prospects but also contribute to the wider AI/ML community.--DIVIDER--# Appendices--DIVIDER--## Appendix A: Sample Repository Structures This appendix provides example repository structures for AI/ML projects at the Essential, Professional, and Elite levels. These examples are starting points that should be adapted to your specific project requirements, technology stack, and team preferences. 
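Before walking through the tiers, here is a minimal, hypothetical sketch of how you might sanity-check a repository against the Essential-tier checklist. The file groups listed in the script are assumptions drawn from the Essential criteria discussed earlier in this article, not an official checklist; adapt them to your own conventions.

```python
# Minimal sketch: audit a repository for Essential-tier files.
# The file groups below are assumptions based on the Essential criteria
# discussed in this article, not an official checklist.
from pathlib import Path

# Each entry lists acceptable alternatives for one essential item.
ESSENTIAL_ITEMS = {
    "readme": ["README.md", "README.rst", "README.txt"],
    "license": ["LICENSE", "LICENSE.md", "LICENSE.txt"],
    "dependencies": ["requirements.txt", "pyproject.toml", "setup.py"],
    "gitignore": [".gitignore"],
}


def audit_repository(repo_path: str) -> dict:
    """Return, for each essential item, whether at least one variant exists."""
    repo = Path(repo_path)
    return {
        item: any((repo / name).exists() for name in variants)
        for item, variants in ESSENTIAL_ITEMS.items()
    }


if __name__ == "__main__":
    for item, present in audit_repository(".").items():
        print(f"{item:12s} {'found' if present else 'MISSING'}")
```

A script like this can be run locally or wired into a CI job to catch missing essentials before a repository is shared.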
### A.1 Essential Repository Structure This basic structure is suitable for simple projects, educational demonstrations, or exploratory research work primarily using Jupyter notebooks: ``` project-name/ │ ├── README.md # Essential project information ├── LICENSE # Appropriate license file ├── requirements.txt # Project dependencies ├── .gitignore # Configured for Python/Jupyter │ ├── notebooks/ # Organized notebooks │ ├── 01_data_exploration.ipynb │ ├── 02_preprocessing.ipynb │ └── 03_model_training.ipynb │ ├── data/ # Data directory (often gitignored) │ ├── .gitkeep # Placeholder to track empty directory │ └── README.md # Data acquisition instructions │ └── models/ # Saved model files (often gitignored) └── .gitkeep # Placeholder to track empty directory ``` **Key Characteristics:** - Clear separation of notebooks, data, and models - Sequential naming of notebooks to indicate workflow - Basic documentation with README files - Simple dependency management with requirements.txt ### A.2 Professional Repository Structure This structure is appropriate for more advanced projects, team collaborations, or code intended for wider distribution: ``` project-name/ │ ├── README.md # Comprehensive project documentation ├── LICENSE # Appropriate license file ├── setup.py # Package installation configuration ├── requirements.txt # Core dependencies ├── requirements-dev.txt # Development dependencies ├── pyproject.toml # Python project metadata ├── .gitignore # Configured for project needs │ ├── src/ # Source code package │ └── project_name/ # Main package directory │ ├── __init__.py # Package initialization │ ├── data/ # Data processing modules │ │ ├── __init__.py │ │ ├── loader.py │ │ └── preprocessor.py │ ├── models/ # Model implementation modules │ │ ├── __init__.py │ │ └── model.py │ ├── utils/ # Utility functions │ │ ├── __init__.py │ │ └── helpers.py │ └── config.py # Configuration parameters │ ├── notebooks/ # Jupyter notebooks (if needed) │ ├── exploration.ipynb │ └── evaluation.ipynb │ ├── tests/ # Test modules │ ├── __init__.py │ ├── test_data.py │ └── test_models.py │ ├── docs/ # Documentation files │ ├── usage.md │ ├── api.md │ └── examples.md │ ├── data/ # Data directory (often gitignored) │ └── README.md # Data acquisition instructions │ └── models/ # Saved model outputs (often gitignored) └── README.md # Model usage information ``` **Key Characteristics:** - Proper Python package structure with `src` layout - Modular organization of code with clear separation of concerns - Comprehensive documentation in dedicated directory - Test directory that mirrors package structure - Separated dependency specifications for different purposes ### A.3 Elite Repository Structure This structure demonstrates a comprehensive repository setup suitable for production-level projects, major open-source initiatives, or reference implementations: ``` project-name/ │ ├── README.md # Main documentation with quick start guide ├── LICENSE # Appropriate license file ├── CHANGELOG.md # Version history and changes ├── CONTRIBUTING.md # Contribution guidelines ├── CODE_OF_CONDUCT.md # Community standards ├── setup.py # Package installation ├── pyproject.toml # Python project config (PEP 518) ├── poetry.lock # Locked dependencies (if using Poetry) ├── requirements/ # Dependency specifications │ ├── base.txt # Core requirements │ ├── dev.txt # Development requirements │ ├── test.txt # Testing requirements │ └── docs.txt # Documentation requirements ├── Dockerfile # Container definition ├── docker-compose.yml # 
Multi-container setup ├── .gitignore # Git ignore patterns ├── .pre-commit-config.yaml # Pre-commit hook configuration ├── .github/ # GitHub-specific configurations │ ├── workflows/ # CI/CD workflows │ └── ISSUE_TEMPLATE/ # Issue templates │ ├── src/ # Source code package │ └── project_name/ # Main package │ ├── __init__.py # Package initialization with version │ ├── cli.py # Command-line interface │ ├── config.py # Configuration management │ ├── exceptions.py # Custom exceptions │ ├── logging.py # Logging configuration │ ├── data/ # Data processing │ ├── models/ # Model implementations │ └── utils/ # Utility functions │ ├── scripts/ # Utility scripts │ ├── setup_environment.sh │ └── download_datasets.py │ ├── notebooks/ # Jupyter notebooks (if applicable) │ └── examples/ # Example notebooks │ ├── tests/ # Test suite │ ├── conftest.py # Test configuration │ ├── integration/ # Integration tests │ └── unit/ # Unit tests organized by module │ ├── docs/ # Documentation │ ├── conf.py # Sphinx configuration │ ├── index.rst # Documentation home │ ├── installation.rst # Installation guide │ ├── api/ # API documentation │ ├── examples/ # Example usage │ └── _static/ # Static content for docs │ ├── data/ # Data directory (structure depends on project) │ ├── raw/ # Raw data (often gitignored) │ ├── processed/ # Processed data (often gitignored) │ └── README.md # Data documentation │ └── models/ # Model artifacts ├── trained/ # Trained models (often gitignored) ├── pretrained/ # Pretrained models └── README.md # Model documentation ``` **Key Characteristics:** - Comprehensive community documents (CONTRIBUTING, CODE_OF_CONDUCT) - Advanced dependency management with separated requirements - Containerization for reproducible environments - CI/CD configuration for automated testing and deployment - Extensive documentation with proper structure - Clear separation of all project components ### Adapting These Structures These sample structures serve as templates that should be adapted based on: 1. **Project Size and Complexity**: Smaller projects may not need all components shown in the Professional or Elite examples. Include only what serves your project's needs. 2. **Technology Stack**: While these examples focus on Python-based projects, adjust directory structures for other languages or frameworks accordingly. 3. **Team Conventions**: Align with existing conventions your team has established for consistency across projects. 4. **Project Type**: Different AI/ML applications may require specialized structures: - Time series forecasting projects might need additional data versioning - Computer vision projects might require separate directories for images/videos - NLP projects might benefit from corpus and vocabulary management structures 5. **Deployment Context**: Projects deployed as APIs, web applications, or embedded systems will need additional structure to support their deployment environments. Remember that repository structure should facilitate development and use—not impose unnecessary overhead. Start with the simplest structure that meets your needs and expand as your project grows in complexity. --DIVIDER--## Appendix B: Sample README Structures This appendix provides example README structures for AI/ML projects at the Essential, Professional, and Elite levels. These templates offer a starting point that should be customized to fit your specific project needs and audience. 
### B.1 Essential README Structure This basic structure covers the minimum needed for a useful README: ```markdown # Project Name Brief description of the project. ## Overview Detailed explanation of what the project does and why it's useful. ## Installation Basic installation instructions. ## Usage Simple examples of how to use the project. ## License Information about the project's license. ``` **Key Characteristics:** - Clear project identity with title and description - Basic explanation of purpose and value - Simple instructions for installation and use - License information for legal clarity ### B.2 Professional README Structure This comprehensive structure supports serious adoption: ```markdown # Project Name Brief description of the project. ## Overview Detailed explanation of what the project does and why it's useful. ## Target Audience Who this project is intended for. ## Prerequisites Required knowledge, hardware, and system compatibility. ## Installation Step-by-step installation instructions. ## Environment Setup Environment and dependency information. ## Usage Detailed usage instructions with examples. ## Data Requirements Expected data formats and setup. ## Testing How to run tests for the project. ## Configuration Information on configuration options. ## License Information about the project's license. ## Contributing Guidelines for contributing to the project. ``` **Key Characteristics:** - Comprehensive project description with target audience - Detailed prerequisites and installation steps - Thorough usage documentation with examples - Technical details on data, testing, and configuration - Community engagement through contribution guidelines ### B.3 Elite README Structure This advanced structure creates a complete resource for all users: ```markdown # Project Name Brief description of the project. ## Overview Detailed explanation of what the project does and why it's useful. ## Target Audience Who this project is intended for. ## Prerequisites Required knowledge, hardware, and system compatibility. ## Installation Step-by-step installation instructions. ## Environment Setup Environment and dependency information. ## Usage Detailed usage instructions with examples. ## Data Requirements Expected data formats and setup. ## Testing How to run tests for the project. ## Configuration Information on configuration options. ## Methodology Explanation of the approach and algorithms. ## Performance Benchmarks and performance expectations. ## License Information about the project's license. ## Contributing Guidelines for contributing to the project. ## Changelog Version history and key changes. ## Citation How to cite this project in academic work. ## Contact How to reach the maintainers. ``` **Key Characteristics:** - All elements from Professional README - Technical depth with methodology and performance sections - Project history through changelog - Academic integration with citation information - Maintainer accessibility through contact information ### Customizing README Content These templates provide structure, but effective READMEs require thoughtful content: 1. **Project Description**: Be clear and specific about what your project does. Avoid vague descriptions and technical jargon without explanation. 2. **Examples**: Include concrete, runnable examples that demonstrate key functionality. Code snippets should be complete enough to execute with minimal modification. 3. 
**Visual Elements**: Consider adding diagrams, screenshots, or other visual elements that clarify complex concepts or demonstrate the project in action. 4. **Audience Adaptation**: Adjust technical depth based on your expected audience. Research projects may include more mathematical detail, while application-focused projects should emphasize practical usage. 5. **Maintenance Status**: Clearly indicate the current maintenance status of the project, especially for open-source work. Remember that a README is often the first interaction users have with your project. It should provide enough information for users to quickly determine if the project meets their needs and how to get started using it.
0z4EC8313LzS
ready-tensor
mit
Time Series Step Classification Benchmark
![hero.jpg](hero.jpg)--DIVIDER--# Introduction In the field of time series analysis, step classification plays a critical role in interpreting sequential data by assigning class labels to each time step. This study presents a comprehensive benchmark of 25 machine learning models trained on five distinct datasets aimed at improving time series step classification accuracy. We evaluated each model's performance using four key metrics: accuracy, precision, recall, and F1-score. Our analysis provides insights into the effectiveness of various modeling approaches across different types of time series data, highlighting the strengths and limitations of each model. The results indicate significant variations in model performance, underscoring the importance of tailored model selection based on specific characteristics of the dataset and the classification task. This study not only guides practitioners in choosing appropriate models for time series step classification but also contributes to the ongoing discourse on methodological advancements in time series analysis.--DIVIDER--# Datasets | dataset | # of series | # classes | # features | min series length | max series length | time frequency | source link | | -------------------------- | :----------: | :-------: | :--------: | :---------------: | :---------------: | :------------: | ----------------------------------------------------------------------------------- | | har70plus | 18 | 7 | 6 | 871 | 1536 | OTHER | [link](https://archive.ics.uci.edu/dataset/780/har70) | | hmm_continuous | 500 | 4 | 3 | 50 | 300 | OTHER | synthetic | | multi_frequency_sinusoidal | 100 | 5 | 2 | 109 | 499 | OTHER | synthetic | | occupancy_detection | 1 | 2 | 5 | 20560 | 20560 | SECONDLY | [link](https://archive.ics.uci.edu/dataset/357/occupancy+detection) | | pamap2 | 9 | 12 | 31 | 64 | 2725 | OTHER | [link](https://archive.ics.uci.edu/dataset/231/pamap2+physical+activity+monitoring) | The HAR70 and PAMAP2 datasets are an aggregated version of the datasets from the UCI Machine Learning Repository. Data were mean aggregated to create a dataset with fewer time steps. The datasets repository is available [here](https://github.com/readytensor/rt_datasets_time_step_classification)--DIVIDER--# Models Our benchmarking study on time series step classification evaluates a diverse array of models, which we have categorized into two main types: Machine Learning (ML) models and Neural Network models. Each model is assessed individually to understand its specific performance characteristics and suitability for different types of time series data. ## Machine Learning Models This category includes 17 ML models, each selected for its unique strengths in pattern recognition and handling of sequential dependencies within time series data. These models range from robust ensemble methods to basic regression techniques, providing a comprehensive overview of traditional machine learning approaches in time series classification. Examples of models in this category include: Random Forest, K-Nearest Neighbors and Logistic Regression ## Neural Network Models Comprising 7 models, this category features advanced neural network architectures that excel in capturing intricate patterns and long-range dependencies in data through deep learning techniques. These models are optimized for handling large datasets and complex classification tasks that might be challenging for traditional ML models. 
Examples of models in this category include: LSTM and CNN ## Special Mention Additionally, our study includes the Distance Profile model, which stands apart from the conventional categories. This model employs a technique based on computing the distances between time series data points, providing a unique approach to classification that differs from typical machine learning or neural network methods. For more information on distance profile, checkout the [Distance Profile for Time-Step Classification in Time Series Analysis](https://app.readytensor.ai/publications/distance_profile_for_time-step_classification_in_time_series_analysis_ljGAbBceZbpv) publication.--DIVIDER--# Results Each model, regardless of its category, is evaluated on its own merits across various datasets to pinpoint the most effective approaches for time series step classification. We have averaged the performance metrics for each model across all datasets. This consolidated data is presented in a heat map, where models are listed on the y-axis and the metrics—accuracy, precision, recall, and F1-score—on the x-axis. The values in the table represent the average of each metric for a model across all datasets, providing a clear, visual comparison of how each model performs generally in time series step classification. This method allows us to succinctly demonstrate the overall performance trends and identify which models consistently deliver the best results across various conditions. ![leaderboard.png](leaderboard.png)--DIVIDER--1. Top Performers Boosting algorithms and advanced ensemble methods generally perform exceptionally well in the task of time series step classification. The top performers include: • CatBoost (0.80): Excels in managing complex features and imbalanced datasets, consistently delivering high performance. • LightGBM (0.78): Known for its efficiency and accuracy, especially in large datasets, with strong overfitting prevention. • Hist Gradient Boosting (0.77): A powerful algorithm that builds on the strength of traditional gradient boosting by optimizing performance with histogram-based methods. • XGBoost (0.77): Offers robustness and scalability, making it an ideal choice for handling large datasets and complex tasks. • Stacking (0.77): Combines multiple models to improve prediction accuracy, performing strongly in time series classification. 2. Strong Contenders These models show good F1-scores but are not at the very top. They are reliable and can be considered for use cases where the top performers might be computationally expensive or overfit: • Gradient Boosting (0.75): A solid model that performs well in a variety of conditions. • Extra Trees (0.75) and Random Forest (0.75): These ensemble models provide robust performance, benefiting from their ability to reduce prediction variance. 3. Baseline or Average Performers These models perform moderately well and may serve as baselines or options when computational simplicity is desired: • Bagging (0.74) and SVC (0.74): Both provide reasonable performance, though not as strong as the top models. • CNN, RNN, and LSTM (all 0.73): Neural networks tailored for sequential data, performing moderately well in this context. • Voting (0.73): A basic ensemble method that combines predictions from multiple models, offering solid but average results. • MLP, ANN, and LSTM-CNN (all 0.72): These neural networks exhibit potential but may require additional tuning to excel in time series step classification. 4. 
Below Average Performers These models have lower F1-scores and might need substantial tuning or are inherently less suitable for time series step classification: • Logistic Regression (0.66), Ridge (0.64), and Decision Tree (0.63): These simpler models struggle to capture the complex temporal dependencies in time series data. • Passive Aggressive (0.63) and Distance Profile (0.62): These models perform less effectively, likely due to their sensitivity to noise and outliers in the dataset. • KNN (0.61): Its performance is hindered by high dimensionality and noise, which are common in time series data. • AdaBoost (0.60): Despite being a boosting algorithm, it underperforms, likely due to its sensitivity to noise and imbalanced datasets.--DIVIDER--# Conclusion Our benchmarking study has provided a comprehensive evaluation of 25 different models across five diverse datasets, focusing on the task of time series step classification. The results highlight the general efficacy of boosting algorithms—specifically CatBoost, LightGBM, and XGBoost—in managing the complexities associated with time series data, with the notable exception of AdaBoost, which did not perform as well. The table visualization of average accuracy, precision, recall, and F1-score across all models and datasets has offered a clear and succinct comparison, underscoring the strengths and potential areas for improvement in each model. This analysis not only assists in identifying the most suitable models for specific types of time series classification tasks but also sheds light on the broader applicability of machine learning techniques in this evolving field. As we continue to advance our understanding of time series analysis, it is crucial to consider not just the accuracy but also the computational efficiency and practical applicability of models in real-world scenarios. Future studies may explore the integration of more complex neural network architectures or the development of hybrid models that can leverage the strengths of both traditional machine learning and neural networks to further enhance classification performance. In conclusion, this study serves as a valuable resource for researchers and practitioners in selecting the right models for their specific needs, ultimately contributing to more effective and efficient time series analysis and classification.
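As a closing illustration of the evaluation protocol described above, the sketch below shows how the four reported metrics can be computed for per-time-step predictions with scikit-learn. The random labels and the use of macro averaging are assumptions made purely for illustration; they are not the benchmark's actual data or configuration.

```python
# Illustrative sketch of per-time-step evaluation (not the benchmark code).
# Labels here are random placeholders; macro averaging is an assumption.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

rng = np.random.default_rng(seed=42)
y_true = rng.integers(0, 4, size=1_000)  # one ground-truth class per time step
y_pred = rng.integers(0, 4, size=1_000)  # one predicted class per time step

scores = {
    "accuracy": accuracy_score(y_true, y_pred),
    "precision": precision_score(y_true, y_pred, average="macro", zero_division=0),
    "recall": recall_score(y_true, y_pred, average="macro", zero_division=0),
    "f1_score": f1_score(y_true, y_pred, average="macro", zero_division=0),
}
for name, value in scores.items():
    print(f"{name}: {value:.3f}")
```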
1yiSfLXTffSF
aryan_patil
none
UV: The Next Generation Python Package Manager Built for Speed
![UV.png](UV.png)--DIVIDER--# TL;DR UV is a Rust-built Python package manager that's 10-100x faster than pip/poetry/conda, combining virtual environment creation and dependency management in one tool while maintaining compatibility with existing Python standards.--DIVIDER--# Introduction The evolution of Python has been closely linked to improvements in package management, from manual installations to modern tools like pip and poetry. Yet, as projects become more and more complex, conventional tools struggle to keep up with the demands for speed and efficiency. UV is a modern, high-performance Python package and project manager developed in Rust. It represents a new generation of Python package managers, serving as a replacement for traditional tools like pip and poetry. It combines the functionality of tools like pip, poetry, and virtualenv and streamlines tasks like dependency management, script execution, and project building, offering significant improvements in speed and reliability. It is designed to address common challenges in the Python ecosystem such as lengthy installation times, dependency conflicts, and the complexity of managing environments. UV accomplishes this through an innovative architecture, delivering 10 to 100 times the speed of conventional package managers. Its key features include support for editable installations, Git and URL-based dependencies, constraint files, custom package indexes, and more. UV's standards-compliant virtual environments integrate smoothly with other tools, eliminating the need for lock-in or extensive customization. It is cross-platform, compatible with Linux, Windows, and macOS, and has undergone rigorous testing against the PyPI index. --DIVIDER--# Key Features - Speed: UV is dramatically faster than traditional tools like pip, greatly reducing the time required to install packages. - Optimization: Saves storage by using a global cache for dependency deduplication. - Flexible Installation Options: Can be installed effortlessly using `curl`, `pip`, or `pipx`, with no need for Python or Rust to be pre-installed. - Cross-Platform Support: Runs on macOS, Linux, and Windows, supporting a wide range of advanced functionalities. - Enhanced Dependency Management: Includes features like version overrides, alternative resolution methods, and a resolver that tracks conflicts. - Error Messaging: Provides detailed and clear error messages, simplifying conflict resolution for developers. - Consolidated Tooling: Integrates the capabilities of tools like `pip`, `pipx`, `poetry`, and `pyenv` into one solution. - Project and Script Management: Handles Python version management, runs scripts with inline dependency metadata, and streamlines everyday workflows. --DIVIDER--# Installation Installing UV is quick and straightforward. You can use a standalone installer or install it directly from PyPI. Before using UV, you may need to add its installation directory to your PATH environment variable. On Linux and macOS, you can update the PATH environment variable by running the following command in the terminal: `export PATH="/path/to/uv:$PATH"` On Windows, to add the directory to the PATH environment variable (for either your user account or the whole system), search for "Environment Variables" in the search bar. Locate the PATH variable under either User Variables or System Variables, click Edit, then select New and input the desired path, for example:
`%USERPROFILE%\.local\bin` With pip: `pip install uv` With pipx: `pipx install uv` With Homebrew: `brew install uv` With Pacman: `pacman -S uv` After the installation, run the `uv` command in the terminal to verify that it has been installed correctly.--DIVIDER--# Creating Virtual Environment Creating a virtual environment with uv is very simple. Use the following command, optionally followed by a name for the environment, to create it: `uv venv` To activate the virtual environment, run the following commands: - For Linux and macOS: `source .venv/bin/activate` - For Windows: `.venv\Scripts\activate` --DIVIDER--# Installing Packages To install packages into the virtual environment, follow a familiar process as shown below: - `uv pip install flask` use this command to install the Flask framework - `uv pip install -r requirements.txt` use this command to install all the dependencies listed in the requirements.txt file. - `uv pip install -e .` use this to install the current project in editable mode, allowing changes to be reflected without reinstalling. - `uv pip install "package @ ."` use this to install the current project from the local disk - `uv pip install "flask[dotenv]"` use this to install Flask along with the additional "dotenv" functionality.--DIVIDER--# Initializing a New Project using UV To initialize a new project with UV, first create a directory for your new project by running the command `mkdir project_name` and then navigate into it using the `cd` command. After creating the project directory, you can initialize the project with uv by running the command `uv init`. This creates the basic project files, such as a `pyproject.toml` configuration file. Once the project is initialized, you can install any required dependencies by running `uv pip install -r requirements.txt`. Then set up any necessary project files, depending on the framework you're using. Finally, once your project is set up, you can run it through UV with `uv run`. --DIVIDER--# Managing Dependencies with UV UV simplifies the process of creating virtual environments and installing dependencies with a single command, `uv add`. When the `uv add` command is executed for the first time, UV creates a new virtual environment in the current directory and installs the specified dependencies. For subsequent commands, UV reuses the existing environment and installs or updates the requested packages, making dependency management efficient. Every time you run the `uv add` command, UV also resolves dependencies. Using its modern dependency resolver, UV analyses the entire dependency graph to identify a compatible set of package versions that fulfil all requirements. The resolver accounts for factors such as version constraints, Python version compatibility, and platform-specific requirements to determine the best set of packages to install. After running the `uv add` command, UV updates both the `pyproject.toml` and `uv.lock` files; after installing packages such as scikit-learn and XGBoost, for example, both appear under the project's dependencies in `pyproject.toml`. To remove a dependency, you can use the `uv remove` command. This uninstalls the specified package along with any dependencies it introduced. This streamlined approach to managing dependencies ensures an efficient and conflict-free environment.--DIVIDER--# Executing Python Scripts with UV After installing the necessary dependencies, you can start writing Python scripts as usual. UV provides different ways to run Python code.
To run a script directly, you can use the `uv run` command followed by your script name instead of the traditional `python script.py` syntax: `$ uv run hello.py` UV can also run self-contained scripts that declare their dependencies inline; a sketch of this is shown after the lock-file comparison below. --DIVIDER--# Using command line tools with UV UV simplifies working with Python packages that provide command-line tools, such as `black` for code formatting, `flake8` for linting and `mypy` for type checking. It offers two interfaces for managing these tools. 1. Running tools with `uv tool run`: This interface allows you to execute command-line tools directly. When you run a command like `uv tool run <tool>`, UV creates a temporary virtual environment in its cache, installs the specified tool, and executes it from that cached environment. 2. Using the `uvx` command: Similarly, when you run a command via uvx, UV sets up a temporary virtual environment, installs the required tool, and runs it without polluting your project's primary virtual environment. This approach keeps your project's dependencies clean while providing fast execution times, since the tools are managed separately in a cached environment rather than being installed directly into your project's environment. --DIVIDER--# Key Features of UV Tool Interface - Compatible with any Python package that provides command-line tools, such as flake8, mypy, black, or pytest. - Cached environments are automatically removed when UV's cache is cleared. - New cached environments are created on-demand as required. - Ideal for occasional use of development tools without cluttering project dependencies. --DIVIDER--# Lock Files Lock files (`uv.lock`) are an essential part of dependency management in UV. When you run `uv add` commands to install dependencies, UV automatically generates and updates a `uv.lock` file. The file serves several important functions: - It captures the exact version of all installed dependencies and their sub-dependencies. - It ensures reproducible builds by "locking" dependency versions across different systems and environments. - It minimizes the risk of "dependency hell" by maintaining consistent package versions. - It speeds up installation since UV can use the locked versions instead of resolving the dependencies again. The management of the lock file is entirely automated, so manual edits are unnecessary. To ensure consistent environments for all collaborators, the `uv.lock` file should always be included in version control. --DIVIDER--# Difference between Lock Files and requirements.txt Lock files and requirements.txt serve similar purposes in tracking dependencies but differ in their details and use cases. Lock files contain detailed information about exact package versions and their complete dependency tree, ensuring consistent environments across development. requirements.txt files are simpler, typically listing only direct dependencies, which makes them more suitable for deployment scenarios or for sharing code with users who may not be using UV. These files are often required for compatibility with external tools and services that do not recognize UV's lock file format. While lock files are indispensable for maintaining reliable builds during development, requirements.txt is more appropriate when distributing or deploying in environments where UV-specific features are unavailable. Both formats complement each other in managing dependencies effectively.
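To make the script-execution workflow mentioned earlier more concrete, here is a minimal sketch of a self-contained script that declares its own dependencies using inline metadata (PEP 723), which recent versions of uv can read when you invoke `uv run`. The script name and the `requests` dependency are illustrative assumptions chosen for this example, not part of uv itself.

```python
# fetch_status.py -- illustrative example; the script name and the
# `requests` dependency are assumptions chosen for demonstration.
#
# The block below is PEP 723 inline script metadata. When you execute
# `uv run fetch_status.py`, uv reads it, prepares an isolated environment
# with the declared dependencies, and then runs the script.
#
# /// script
# requires-python = ">=3.9"
# dependencies = [
#     "requests",
# ]
# ///
import requests

# Query the PyPI JSON API for the latest published uv version.
response = requests.get("https://pypi.org/pypi/uv/json", timeout=10)
print("Latest uv version on PyPI:", response.json()["info"]["version"])
```

Because the environment for such scripts is resolved on the fly and cached, this approach pairs well with the tool-running workflow described above and keeps your project's own environment untouched.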
--DIVIDER--![Blue Gradient Modern Freelancer YouTube Thumbnail .png](Blue%20Gradient%20Modern%20Freelancer%20YouTube%20Thumbnail%20.png) # UV vs PIP PIP has been the standard tool for managing Python packages and creating virtual environments. While it is effective, UV provides advantages that make it a compelling alternative. Here are some of them: - Speed: Developed with Rust, UV is much faster than PIP for package installation and dependency resolution, completing tasks in seconds that might take minutes in PIP. - Integrated Environment Management: Unlike virtualenv, which focuses solely on environment creation, and PIP, which handles package installation, UV combines both functionalities into a single tool, simplifying the development workflow. UV maintains full compatibility with PIP's ecosystem while addressing some of its limitations. It supports the same requirements.txt files and package indexes, making the transition to UV simple and effortless. The key differences include: - Performance: UV's parallel downloads and optimized dependency resolver make it 10-100x faster than PIP for larger projects. - Memory Efficiency: During package installation and dependency resolution, UV consumes significantly less memory than PIP. - Enhanced Error Handling: UV provides clearer error messages and better conflict resolution when dependencies clash. - Reproducibility: UV's lockfile mechanism ensures consistent environments across different systems, addressing a limitation of standard requirements.txt files. Although PIP remains a reliable choice, UV's modern design, enhanced performance, and integrated features provide developers with a more efficient and streamlined workflow. Its ability to integrate seamlessly into existing projects without disrupting current processes makes UV an excellent option. --DIVIDER--# UV vs Poetry UV also promises many of the same benefits as Poetry, such as: - Dependency Management: Both tools excel at handling package dependencies and creating virtual environments. - Project Structure: They provide utilities for initializing and organizing Python projects. - Lock Files: Both generate lock files to ensure consistent and reproducible environments across systems. - Package Publishing: They support publishing Python packages to PyPI. - Modern Tooling: Both represent contemporary approaches to Python projects and dependency management. What sets UV apart is its extraordinary speed and minimal resource usage. While Poetry is a major step forward compared to traditional tools, UV pushes the boundaries even further with its Rust-based implementation. Additionally, UV's compatibility with existing Python packaging standards allows it to work seamlessly alongside tools like pip. This offers flexibility that Poetry's more rigid approach doesn't always provide. --DIVIDER--# UV vs Conda Many developers who avoid using PIP and virtualenv often turn to Conda, and for good reasons: - Conda offers a package management solution that handles not only Python packages but also system-level dependencies. - It is effective for managing complex scientific computing environments, supporting libraries like NumPy, SciPy, and TensorFlow. - Conda environments are highly isolated and ensure reproducibility across various operating systems. However, even dedicated Conda users might find compelling reasons to explore UV.
With its exceptionally fast package installation and dependency resolution, UV significantly reduces the time needed to set up environments compared to Conda's often slower performance. UV's lightweight design translates to lower memory usage and faster startup times. Additionally, UV integrates with existing Python packaging tools and standards, ensuring compatibility with the broader Python ecosystem. For projects that don't require Conda's non-Python package management, UV provides a more streamlined, efficient solution that can significantly improve development workflows. --DIVIDER--# Switching from PIP or Virtualenv to UV ![Blue Gradient Modern Freelancer YouTube Thumbnail (1).png](Blue%20Gradient%20Modern%20Freelancer%20YouTube%20Thumbnail%20%20(1).png) Migrating from PIP and virtualenv to UV is a simple process since UV maintains full compatibility with existing Python packaging standards. If you have an existing project using virtualenv and pip, start by generating a requirements.txt file from your current environment. This can be done with the following command: `$ pip freeze > requirements.txt` Next, create a new UV project in the same directory: `$ uv init .` Then install the dependencies from your requirements.txt file using UV's pip-compatible interface: `$ uv pip install -r requirements.txt` After setting up your UV environment, you can replace the common pip and virtualenv commands with their UV equivalents. Once the migration is complete, you can safely remove the old virtualenv directory and start using UV's virtual environment management. The transition should be smooth, and you can continue to use familiar pip commands through UV's pip compatibility layer.--DIVIDER--# Current Limitations While UV offers a fast and efficient solution for Python package management, it does have some limitations. One of the main challenges is its incomplete pip compatibility. Although UV supports a significant portion of the pip interface, it does not yet cover the entire feature set. Some of these limitations are due to intentional design choices, while others are a result of UV being in an early stage of development. For a detailed comparison, you can also refer to the pip compatibility guide. Another limitation is the platform-specific requirements.txt files. Similar to `pip-compile`, UV generates platform-specific `requirements.txt` files, which can cause issues when trying to transfer them across different platforms or Python environments. This differs from tools like `Poetry` and `PDM`, which create platform-agnostic lock files (e.g., `poetry.lock` or `pdm.lock`). As a result, UV's `requirements.txt` files may not be as portable across different environments as those generated by other tools. --DIVIDER--# Conclusion UV represents a modern advancement in Python package management, offering a fast and efficient alternative to traditional tools like PIP and virtualenv. Its key advantages include 10-100x faster performance, integration with Python packaging standards, built-in virtual environment management, efficient dependency resolution, and a low memory footprint, all of which greatly enhance the development workflow. Whether you are starting a new project or migrating an existing one, UV provides a robust solution that improves efficiency while maintaining compatibility with existing tools. With continuous advancements in the Python ecosystem, UV demonstrates how modern technologies like Rust can enhance the development experience without compromising the simplicity that Python developers appreciate. --DIVIDER--# References 1.
[uv](https://github.com/astral-sh/uv): Python environment and package manager. 2. [PIP](https://pypi.org/project/pip/): Python package installer. 3. [Conda](https://github.com/conda/conda): Cross-platform, language-agnostic binary package manager. 4. [Poetry](https://python-poetry.org/): Python packaging and dependency manager.
4SAKUg8ciBuV
ready-tensor
cc-by-sa
Image compression with Auto-Encoders
![hero.png](hero.png)--DIVIDER--# Introduction to Auto-Encoders In the field of data compression, traditional methods have long dominated, ranging from lossless techniques such as ZIP file compression to lossy techniques like JPEG image compression and MPEG video compression. These methods are typically rule-based, utilizing predefined algorithms to reduce data redundancy and irrelevance to achieve compression. However, with the advent of advanced machine learning techniques, particularly Auto-Encoders, new avenues for data compression have emerged that offer distinct advantages over traditional methods in certain contexts. Auto-encoders are a class of neural network designed for unsupervised learning of efficient encodings by compressing input data into a condensed representation and then reconstructing the output from this representation. The primary architecture of an auto-encoder consists of two main components: an encoder and a decoder. The encoder compresses the input into a smaller, dense representation in the latent space, and the decoder reconstructs the input data from this compressed representation as closely as possible to its original form. --DIVIDER--![auto-encoder.png](auto-encoder.png) --DIVIDER--# Advantages Over Traditional Compression The flexibility and learning-based approach of Auto-Encoders provide several benefits over traditional compression methods: - **Adaptability**: Unlike traditional methods that rely on fixed algorithms, Auto-Encoders can learn from data, adapting their parameters to optimize for specific types of data or applications. This adaptability makes them particularly useful for complex data types for which traditional compression algorithms may not be optimized, such as high-dimensional data or heterogeneous datasets. - **Feature Learning**: Auto-Encoders are capable of learning to preserve important features in the data while still achieving compression. This is especially beneficial in domains like medical imaging or scientific data analysis, where preserving specific features can be more important than minimizing storage space or transmission bandwidth. - **Lossy Compression with Controlled Degradation**: Auto-Encoders offer lossy compression with adjustable quality. By tuning the network architecture and training parameters, we can balance compression ratio against reconstruction quality. This flexibility allows for fine-grained control over information loss, unlike many traditional methods which often have fixed or limited preset options for quality-compression trade-offs. - **Non-Linear Compression**: Unlike traditional algorithms such as Principal Component Analysis (PCA) or Singular Value Decomposition (SVD) that perform linear transformations, Auto-Encoders can model complex, non-linear relationships in the data. This capability allows for more efficient compression schemes that better capture the underlying data structure. - **Scalability**: Auto-Encoders offer excellent scalability for large datasets. Once trained, they can compress new data points quickly, with encoding time typically scaling linearly with input size. This makes them well-suited for applications involving high-volume data or real-time compression needs. Additionally, Auto-Encoders can be implemented efficiently on GPUs, further enhancing their performance on large-scale tasks. 
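To make the encoder/decoder idea concrete before moving on to the experiments, here is a minimal sketch of a convolutional autoencoder for 28x28 grayscale images trained with MSE loss. It is written with Keras purely for illustration; the layer sizes and the latent dimension are assumptions and do not reproduce the exact architecture used in the notebook referenced in the Resources section.

```python
# Minimal sketch (not the exact notebook code) of a convolutional
# autoencoder for 28x28 grayscale images, trained with MSE loss.
import tensorflow as tf
from tensorflow.keras import layers, models

latent_dim = 39  # ~95% compression relative to the 784 input pixels (assumption)

# Encoder: compress the image into a latent vector
encoder = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(16, 3, strides=2, padding="same", activation="relu"),  # 14x14
    layers.Conv2D(32, 3, strides=2, padding="same", activation="relu"),  # 7x7
    layers.Flatten(),
    layers.Dense(latent_dim),
])

# Decoder: reconstruct the image from the latent vector
decoder = models.Sequential([
    layers.Input(shape=(latent_dim,)),
    layers.Dense(7 * 7 * 32, activation="relu"),
    layers.Reshape((7, 7, 32)),
    layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu"),  # 14x14
    layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu"),  # 28x28
    layers.Conv2D(1, 3, padding="same", activation="sigmoid"),
])

autoencoder = models.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")

# Train on MNIST, using the images themselves as the reconstruction targets
(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0
x_test = x_test[..., None].astype("float32") / 255.0
autoencoder.fit(x_train, x_train, epochs=5, batch_size=256,
                validation_data=(x_test, x_test))
```

Varying `latent_dim` is what trades compression ratio against reconstruction quality, which is exactly the experiment explored in the next section.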
--DIVIDER--# Exploring Compression Capabilities of Auto-Encoders In the notebook included in the **Resources** section, an experimental framework is set up to investigate the compression capabilities of Auto-Encoders using the MNIST dataset. MNIST, a common benchmark in machine learning, consists of 60,000 grayscale images in 10 classes of size 28x28, providing a diverse range of handwritten digits for evaluating model performance. # Methodology For the image compression task, we utilize a convolutional autoencoder, leveraging the spatial hierarchy of convolutional layers to efficiently capture the patterns in image data. The autoencoder's architecture includes multiple convolutional layers in the encoder part to compress the image, and corresponding deconvolutional layers in the decoder part to reconstruct the image. The model is trained with the objective of minimizing the mean squared error (MSE) between the original and reconstructed images, promoting fidelity in the reconstructed outputs. # Experimental Setup The notebook details a systematic exploration of different sizes of the latent space, ranging from high-dimensional to low-dimensional representations. The goal is to understand how the dimensionality of the latent space affects both the compression percentage and the quality of the reconstruction. The compression percentage is calculated based on the ratio of the dimensions of the latent space to the original image dimensions, while the reconstruction error is measured using the MSE. We explore 4 scenarios of compression: 50%, 90%, 95% and 99%. --DIVIDER--# Results ## Original vs Reconstructed Images Let's examine a sample of images to visualize how the size reduction in the latent space affects the quality of reconstructed images: ![compressed-images.png](compressed-images.png) As we increase the compression ratio, we observe: 1. Increasing blur in reconstructed images 2. At 99% compression: - Digit "2" starts resembling an "8" - Digit "4" looks like a "9" 3. Most digits remain recognizable until extreme compression. This highlights the trade-off between compression efficiency and image fidelity. --DIVIDER--## Compression ratio vs MSE Loss We now examine the relationship between compression ratio and reconstruction loss (MSE). Specifically, as the latent space is reduced, achieving higher compression percentages, the reconstruction error initially remains low, indicating effective compression. However, a marked increase in reconstruction error is observed as the latent dimension is further reduced beyond a certain threshold . This suggests a boundary in the compression capabilities of the autoencoder, beyond which the loss of information significantly impacts the quality of the reconstructed images. --- ![reconstruction_error.png](reconstruction_error.png) -----DIVIDER--The chart below illustrates the reconstruction error for each digit at 95% and 99% compression rates. --- ![label_error.png](label_error.png) --- Our analysis reveals that the digit "1" shows the lowest reconstruction error, while digit "2" exhibits the highest error at 95% compression, and digit "8" at 99% compression. However, it's crucial to understand that these results don't account for the total amount of information each digit contains, often visualized as the amount of "ink" or number of pixels used to write it. The lower error for digit "1" doesn't necessarily mean it's simpler to represent in latent space. 
Rather, even if all digits were equally complex to encode per unit of information, digits like "2" or "8" would naturally accumulate more total error because they contain more information (more "ink" or active pixels). For a fairer comparison, we would need to normalize the error by the amount of information in each digit. For instance, if we measured error per 100 pixels of "ink", we might find that the relative complexity of representing each digit in the latent space is more similar than the raw error suggests.--DIVIDER--## Comparing Distributions Using t-SNE Below is a scatter plot that visualizes the distribution of original images (blue points) and their reconstructed counterparts (red points) using t-SNE. This visualization allows us to compare the high-dimensional structure of the original and reconstructed data in a 2D space. Key observations: 1. At lower compression ratios, the blue and red points significantly overlap, indicating that the reconstructed images closely match the distribution of the original images. 2. As we increase the compression to 99%, we begin to see some divergence between the original and reconstructed distributions: - The digit "1" shows the most noticeable separation between blue and red points at 99% compression, suggesting that this digit's reconstruction is most affected by extreme compression. - Digits 3, 7, 8, and 9 also exhibit slight divergences at this high compression level, though less pronounced than digit "1". 3. The degree of overlap between blue and red points serves as a visual indicator of reconstruction quality. Greater overlap suggests better preservation of the original data's structure, while separation indicates more significant information loss during compression. --- ![tsne.png](tsne.png) -----DIVIDER--:::info{title="Info"} ## Regarding t-SNE t-SNE (t-distributed Stochastic Neighbor Embedding) is a popular technique for visualizing high-dimensional data in two or three dimensions. It's particularly effective at revealing clusters and patterns in complex datasets. t-SNE works by maintaining the relative distances between points in the original high-dimensional space when projecting them onto a lower-dimensional space. This means that points that are close together in the original data will tend to be close together in the t-SNE visualization, while distant points remain separated. This property makes t-SNE especially useful for exploring the structure of high-dimensional data, such as images or word embeddings, in a more interpretable 2D or 3D format. </br></br> In this tutorial, we're using t-SNE to compare the distributions of original images and their autoencoder reconstructions. By plotting both sets of data points on the same t-SNE chart (using different colors, e.g., blue for originals and red for reconstructions), we can visually assess the quality of the reconstruction. If the autoencoder is performing well, the blue and red points should significantly overlap, indicating that the original and reconstructed data have similar distributions. Conversely, if the points are clearly separated, it suggests that the reconstructions differ significantly from the originals, pointing to potential issues with the autoencoder's performance. </br></br> One might wonder why t-SNE, which can effectively reduce high-dimensional data to two or three dimensions for visualization, isn't directly used for data compression. There are two major limitations that make t-SNE unsuitable for this purpose: 1. 
Computational Complexity: t-SNE has a time complexity of O(n²), where n is the number of data points. This quadratic scaling makes it computationally expensive and impractical for large datasets. 2. Non-Parametric Nature: t-SNE doesn't learn a parametric mapping between the high-dimensional and low-dimensional spaces. This means it can't directly transform new, unseen data points without recomputing the entire embedding. These limitations highlight why we use purpose-built compression techniques, such as Auto-Encoders, which offer better scalability and can efficiently process new data once trained. :::--DIVIDER--# Summary This publication investigated the efficacy of autoencoders as a tool for data compression, with a focus on image data represented by the MNIST dataset. Through systematic experimentation, we explored the impact of varying latent space dimensions on both the compression ratio and the quality of the reconstructed images. The primary findings indicate that autoencoders, leveraging their neural network architecture, can indeed compress data significantly while retaining a considerable amount of original detail, making them superior in certain aspects to traditional compression methods.
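The full experiments live in the notebook referenced in the **Resources** section. As a rough illustration of the methodology described earlier, here is a minimal sketch of a convolutional autoencoder trained with an MSE objective on MNIST-sized inputs. The layer sizes and latent dimension are illustrative assumptions, not the exact architecture used in the experiments.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Minimal convolutional autoencoder for 28x28 grayscale images (illustrative)."""
    def __init__(self, latent_dim: int = 8):  # ~99% compression of the 784 input values
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # 28x28 -> 14x14
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 14x14 -> 7x7
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, latent_dim),                      # bottleneck
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 7 * 7),
            nn.ReLU(),
            nn.Unflatten(1, (32, 7, 7)),
            nn.ConvTranspose2d(32, 16, kernel_size=3, stride=2, padding=1, output_padding=1),  # 7x7 -> 14x14
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=3, stride=2, padding=1, output_padding=1),   # 14x14 -> 28x28
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = ConvAutoencoder(latent_dim=8)
criterion = nn.MSELoss()                       # reconstruction objective
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

batch = torch.rand(64, 1, 28, 28)              # stand-in for a batch of MNIST images
loss = criterion(model(batch), batch)
loss.backward()
optimizer.step()
```

Varying `latent_dim` (for example 392, 78, 39, and 8) reproduces the 50%, 90%, 95%, and 99% compression scenarios discussed above.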
57Nhu0gMyonV
ready-tensor
mit
Building CLIP from Scratch: A Tutorial on Multi-Modal Learning
![hero-image.png](hero-image.png)--DIVIDER--# Abstract This work provides a comprehensive implementation of Contrastive Language-Image Pretraining (CLIP) from the ground up. CLIP, introduced by OpenAI, jointly trains image and text encoders using contrastive learning to align visual and textual representations in a shared embedding space. This tutorial details the architectural design, including a transformer-based text encoder and a Vision Transformer-style image encoder, as well as the application of contrastive loss for training. The resulting implementation offers a clear, reproducible methodology for understanding and constructing CLIP models, facilitating further exploration of multi-modal learning techniques.--DIVIDER--# Introduction Contrastive Language-Image Pretraining (CLIP) is a pioneering multi-modal model introduced by OpenAI that bridges the gap between visual and textual understanding. By jointly training an image encoder and a text encoder, CLIP learns to align these two modalities in a shared embedding space, enabling it to perform tasks such as zero-shot image classification, image search by textual queries, and matching images against candidate textual descriptions. This alignment is achieved through contrastive learning, where the model is trained to associate corresponding image-text pairs while distinguishing them from unrelated pairs. The key innovation of CLIP lies in its ability to generalize across a wide range of visual and textual inputs without requiring task-specific fine-tuning. This is particularly useful in open-ended scenarios where the model is expected to handle diverse, unseen data. Traditional models often require large labeled datasets and are constrained to specific tasks. In contrast, CLIP can be trained on uncurated, web-scale datasets containing image-text pairs, making it highly flexible and applicable in various domains, from content retrieval to creative generation. The usefulness of CLIP extends beyond its impressive performance on standard vision tasks. It provides a scalable approach to multi-modal learning, where text can be leveraged to guide image understanding in more abstract ways, and vice versa. This makes it a powerful tool for applications in fields like computer vision, natural language processing, and even human-computer interaction, where cross-modal relationships are essential. In this tutorial, the focus will be on implementing CLIP from scratch, offering insights into its architecture and training process. This implementation provides a hands-on exploration of the core principles of multi-modal contrastive learning, highlighting CLIP’s versatility and effectiveness in real-world applications.--DIVIDER--# CLIP Architecture CLIP employs a dual-encoder architecture that processes images and text separately but aligns their representations in a shared embedding space. The model consists of two key components: an **image encoder** and a **text encoder**. These encoders operate independently to produce embeddings for their respective inputs, which are then compared using a contrastive loss function to learn meaningful correspondences between images and their associated textual descriptions. ![clip-overview.png](clip-overview.png)--DIVIDER--## Image Encoder The image encoder in CLIP is responsible for converting images into high-dimensional embeddings that capture meaningful visual features. 
These embeddings are then aligned with text embeddings through a shared space, allowing the model to learn relationships between images and textual descriptions. The image encoder is flexible and can be built using different architectures, with **ResNet** and **Vision Transformers (ViT)** being the most commonly used. Both of these architectures can be employed in CLIP to encode visual information effectively. The choice of image encoder depends on the complexity and scale of the task, as well as the type of image data being used. ResNet tends to work well for standard image recognition tasks, while ViT excels in capturing more abstract relationships within images. ![img-encoder2.png](img-encoder2.png) <h2> Image Encoder Architecture</h2> The image encoder in this implementation is inspired by the Vision Transformer (ViT) architecture, which processes images as sequences of patches, allowing it to capture relationships across different regions of an image efficiently. 1. **Patch Embedding**: The first step in the image encoding process is to divide the input image into small, fixed-size patches (in this case, 16x16 pixels). Each patch is treated as an individual token, similar to words in a text sequence. These patches are then linearly projected into a higher-dimensional space (768 dimensions), effectively converting the image into a series of patch embeddings. This process ensures that the model can process and understand each part of the image separately. 2. **Positional Embedding**: Since transformers are sequence models and do not inherently have any notion of spatial relationships, positional embeddings are added to each patch embedding. These positional embeddings provide information about the relative position of each patch in the original image, ensuring that the model can account for spatial arrangement while processing the image. ```python class ImageEmbeddings(nn.Module): def __init__( self, embed_dim: int = 768, patch_size: int = 16, image_size: int = 224, num_channels: int = 3, ): super(ImageEmbeddings, self).__init__() self.embed_dim = embed_dim self.patch_size = patch_size self.image_size = image_size self.num_channels = num_channels self.patch_embedding = nn.Conv2d( in_channels=self.num_channels, out_channels=self.embed_dim, kernel_size=self.patch_size, stride=self.patch_size, padding="valid", ) self.num_patches = (self.image_size // self.patch_size) ** 2 self.position_embedding = nn.Embedding(self.num_patches, self.embed_dim) self.register_buffer( "position_ids", torch.arange(self.num_patches).expand((1, -1)), persistent=False, ) def forward(self, x: torch.Tensor) -> torch.Tensor: # x: (Batch size, Channels, Height, Width) -> (Batch size, Embed dim, Height, Width) x = self.patch_embedding(x) # x: (Batch size, Embed dim, Height, Width) -> (Batch size, Height * Width, Embed dim) x = x.flatten(2).transpose(1, 2) # Add position embeddings x = x + self.position_embedding(self.position_ids) return x ``` 3. **Self-Attention Mechanism**: Once the image has been converted into a series of patch embeddings with positional information, a multi-head self-attention mechanism is applied. In this context, since each patch can attend to all other patches in the image, no masking is required, unlike in tasks such as language modeling where padding or causal masking may be necessary. The attention mechanism enables the model to weigh the importance of different patches relative to each other, allowing it to focus on significant regions of the image. 
This setup captures both local and global interactions across patches, and the use of multiple heads enables the model to learn various relationships in parallel, enriching the understanding of the image’s structure. ```python class Attention(nn.Module): def __init__( self, embed_dim: int = 768, num_heads: int = 12, qkv_bias: bool = False, attn_drop_rate: float = 0.0, proj_drop_rate: float = 0.0, ): super(Attention, self).__init__() assert ( embed_dim % num_heads == 0 ), "Embedding dimension must be divisible by number of heads" self.num_heads = num_heads head_dim = embed_dim // num_heads self.scale = head_dim**-0.5 self.wq = nn.Linear(embed_dim, embed_dim, bias=qkv_bias) self.wk = nn.Linear(embed_dim, embed_dim, bias=qkv_bias) self.wv = nn.Linear(embed_dim, embed_dim, bias=qkv_bias) self.attn_drop = nn.Dropout(attn_drop_rate) self.wo = nn.Linear(embed_dim, embed_dim) self.proj_drop = nn.Dropout(proj_drop_rate) def forward(self, x: torch.Tensor) -> torch.Tensor: # x: (Batch size, Num patches, Embed dim) batch_size, n_patches, d_model = x.shape q = ( self.wq(x) .reshape(batch_size, n_patches, self.num_heads, d_model // self.num_heads) .transpose(1, 2) ) k = ( self.wk(x) .reshape(batch_size, n_patches, self.num_heads, d_model // self.num_heads) .transpose(1, 2) ) v = ( self.wv(x) .reshape(batch_size, n_patches, self.num_heads, d_model // self.num_heads) .transpose(1, 2) ) attn = (q @ k.transpose(-2, -1)) * self.scale attn = attn.softmax(dim=-1) attn = self.attn_drop(attn) x = (attn @ v).transpose(1, 2).reshape(batch_size, n_patches, d_model) x = self.wo(x) x = self.proj_drop(x) return x ``` 4. **Feed-Forward Network (MLP)**: After the attention mechanism, the patch embeddings pass through a multi-layer perceptron (MLP). This feed-forward network processes each patch embedding individually, helping the model to further refine the visual features extracted from the image. It consists of two linear layers with a non-linear activation function in between, followed by dropout to prevent overfitting. ```python class MLP(nn.Module): def __init__( self, in_features: int, hidden_features: int, drop_rate: float = 0.0, ): super(MLP, self).__init__() self.fc1 = nn.Linear(in_features, hidden_features) self.act = nn.GELU() self.fc2 = nn.Linear(hidden_features, in_features) self.drop = nn.Dropout(drop_rate) def forward(self, x: torch.Tensor) -> torch.Tensor: # x: (Batch size, Num patches, Embed dim) x = self.fc1(x) x = self.act(x) x = self.drop(x) x = self.fc2(x) x = self.drop(x) return x ``` 5. **Layer Normalization and Residual Connections**: To stabilize training and improve performance, layer normalization is applied before both the attention and MLP layers. Additionally, residual connections are employed, where the input to each block is added to the block’s output, allowing the model to retain information from earlier layers and avoid vanishing gradients. These techniques improve the model’s ability to learn efficiently, even with deep architectures. 
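Before these pieces are assembled into a full encoder layer below, a quick shape check helps confirm how a batch flows through them. The snippet is an illustrative sanity check using the default hyperparameters defined above, not part of the model code itself:

```python
import torch

images = torch.randn(2, 3, 224, 224)             # batch of 2 RGB images

patches = ImageEmbeddings()(images)              # (2, 196, 768): 196 = (224 / 16) ** 2 patches
attended = Attention()(patches)                  # (2, 196, 768): self-attention keeps the shape
refined = MLP(in_features=768, hidden_features=768 * 4)(attended)  # (2, 196, 768)

print(patches.shape, attended.shape, refined.shape)
```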
<h2>Image Encoder Layer</h2> ```python class ImageEncoderLayer(nn.Module): def __init__( self, embed_dim: int = 768, num_heads: int = 12, mlp_ratio: int = 4, qkv_bias: bool = False, drop_rate: float = 0.0, attn_drop_rate: float = 0.0, ): super(ImageEncoderLayer, self).__init__() self.norm1 = nn.LayerNorm(embed_dim, eps=1e-6) self.attn = Attention( embed_dim=embed_dim, num_heads=num_heads, qkv_bias=qkv_bias, attn_drop_rate=attn_drop_rate, proj_drop_rate=drop_rate, ) self.norm2 = nn.LayerNorm(embed_dim) self.mlp = MLP( in_features=embed_dim, hidden_features=int(embed_dim * mlp_ratio), drop_rate=drop_rate, ) def forward(self, x: torch.Tensor) -> torch.Tensor: # x: (Batch size, Num patches, Embed dim) residual = x x = self.norm1(x) x = residual + self.attn(x) residual = x x = self.norm2(x) x = residual + self.mlp(x) return x ``` This architecture provides the flexibility to learn both fine-grained details and abstract patterns across images, making it effective for encoding visual information in multi-modal tasks like CLIP. The combination of patch embeddings, attention, and feed-forward networks allows the model to understand and represent images in a way that can be directly compared to text embeddings. --DIVIDER--## Text Encoder The text encoder in CLIP is responsible for converting input text into a fixed-dimensional embedding that can be aligned with image embeddings. CLIP can use various transformer-based models like **BERT** or **GPT** as its text encoder. These models tokenize the input text, turning each word or subword into an embedding vector that captures semantic meaning. To handle word order, **positional encodings** are added to these token embeddings, ensuring the model understands the structure of the sentence. A **multi-head self-attention** mechanism then allows each token to attend to all others in the sequence, capturing both local and global dependencies in the text. Finally, the output is refined through a **feed-forward network**, with **layer normalization** and **residual connections** applied to stabilize training and maintain information across layers. This architecture ensures the model generates high-quality embeddings that represent the meaning of the text, ready to be aligned with the corresponding image embeddings. In this implementation, we chose GPT-2 as our text encoder: ```python configuration = GPT2Config( vocab_size=50257, n_positions=max_seq_length, n_embd=embed_dim, n_layer=num_layers, n_head=num_heads, ) self.text_encoder = GPT2Model(configuration) ```--DIVIDER--## Data Fusion Once the image and text inputs have been encoded separately by their respective encoders, CLIP projects both modalities into a shared embedding space. This process, known as **data fusion**, allows the model to align visual and textual representations so that they can be directly compared. To achieve this, both the image and text embeddings are passed through a **projection layer** that maps them into the same dimensional space. By doing so, the model can compute similarities between images and text, enabling it to link corresponding image-text pairs and differentiate between unrelated ones. This shared space is crucial for tasks like zero-shot image classification and cross-modal retrieval, where the model must understand and relate visual and textual information in a unified way. 
```python self.image_projection = nn.Linear(img_embed_dim, embed_dim) self.text_projection = nn.Linear(embed_dim, embed_dim) ``` --DIVIDER--# Contrastive Loss CLIP’s training process relies on **contrastive learning**, which is designed to align image and text embeddings by maximizing the similarity between matched pairs while minimizing it for mismatched pairs. This is achieved through the use of a **contrastive loss** function, which encourages the model to bring together the embeddings of corresponding images and text in the shared space. During training, the model is given a batch of image-text pairs. For each pair, the model computes similarities between the image embedding and all the text embeddings in the batch, as well as between the text embedding and all the image embeddings. The goal is to maximize the similarity for the correct image-text pair and minimize it for all incorrect pairs. This encourages the model to learn meaningful correspondences between images and descriptions, ensuring that related images and text are positioned closely in the embedding space, while unrelated pairs are pushed apart. ![clip-loss-1.png](clip-loss-1.png) The contrastive loss can be implemented as follows: ```python def clip_loss(image_embeddings, text_embeddings): # Normalize embeddings image_embeddings = F.normalize(image_embeddings, dim=-1) text_embeddings = F.normalize(text_embeddings, dim=-1) # Compute logits by multiplying image and text embeddings (dot product) logits_per_image = image_embeddings @ text_embeddings.T logits_per_text = text_embeddings @ image_embeddings.T # Create targets (diagonal is positive pairs) num_samples = image_embeddings.shape[0] labels = torch.arange(num_samples, device=image_embeddings.device) # Compute cross-entropy loss for image-to-text and text-to-image directions loss_image_to_text = F.cross_entropy(logits_per_image, labels) loss_text_to_image = F.cross_entropy(logits_per_text, labels) # Final loss is the average of both directions loss = (loss_image_to_text + loss_text_to_image) / 2.0 return loss ``` --DIVIDER--# Solving Multiple Choice Questions <h2>Model Training and Evaluation on Image-Based MCQ Task</h2> One of the practical use cases for CLIP-like models is solving multiple-choice questions (MCQs) where the question is an image and the answer options are in text form. This setup highlights CLIP’s ability to bridge visual and textual data, aligning image features with corresponding text descriptions to select the most relevant answer. To train the model for this type of task, we used the [Attila1011/img_caption_EN_AppleFlair_Blip](https://huggingface.co/datasets/Attila1011/img_caption_EN_AppleFlair_Blip) dataset from Hugging Face. This dataset contains pairs of images and corresponding captions, making it ideal for training models that require aligned image-text data, such as CLIP. By learning the associations between diverse visual inputs and their textual descriptions, the model can effectively map images to related text in a shared embedding space, a key component in contrastive learning frameworks. The diverse nature of the images and captions in this dataset allows the model to generalize well across various visual scenes and their textual counterparts. This ensures that the model can capture a wide range of image-text relationships, which is critical for tasks involving open-ended or unseen data, such as solving MCQs where new image-based questions are presented. 
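To make the MCQ setup concrete before looking at the evaluation, here is a rough sketch of how a trained CLIP-style model can score an image question against its text answer options. The `model.encode_image` and `model.encode_text` methods are hypothetical stand-ins for whatever interface your trained model exposes, so adapt the names to your own implementation:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def answer_mcq(model, image, answer_options, tokenizer):
    """Pick the answer whose text embedding is most similar to the image embedding."""
    # Assumed helpers: encode_image / encode_text return projected embeddings
    image_emb = F.normalize(model.encode_image(image.unsqueeze(0)), dim=-1)   # (1, D)
    tokens = tokenizer(answer_options, return_tensors="pt", padding=True)
    text_emb = F.normalize(model.encode_text(tokens), dim=-1)                 # (N, D)

    similarities = (image_emb @ text_emb.T).squeeze(0)                        # (N,)
    best = similarities.argmax().item()
    return answer_options[best], similarities

# Hypothetical usage:
# choice, scores = answer_mcq(clip_model, image_tensor, ["a dog", "a cat", "a car"], text_tokenizer)
```

The highest cosine similarity in the shared embedding space is taken as the model's selected answer.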
After training on this dataset, the model was evaluated on a multiple-choice question (MCQ) dataset where it was tasked with selecting the correct text-based answer for each image. Below, we provide an example visualization, showing the images from the MCQ dataset, the model's answer choices, and its selected answer. ![mcq1.png](mcq1.png)--DIVIDER--# Conclusion In this work, we provided a detailed walkthrough of implementing Contrastive Language-Image Pretraining (CLIP) from scratch, covering both the architectural design and the training process. By leveraging contrastive learning, the model effectively aligns image and text embeddings in a shared space, enabling it to generalize across various multi-modal tasks without the need for task-specific fine-tuning. We demonstrated the versatility of CLIP through its ability to handle both visual and textual information, and further evaluated its performance on a multiple-choice question (MCQ) dataset. This implementation highlights the powerful capabilities of CLIP in multi-modal learning, laying the foundation for future exploration in fields such as computer vision, natural language processing, and cross-modal retrieval.--DIVIDER--# References Radford, Alec, et al. “[Learning Transferable Visual Models From Natural Language Supervision.](https://arxiv.org/pdf/2103.00020)” International Conference on Machine Learning (ICML), 2021. [Attila1011/img_caption_EN_AppleFlair_Blip Dataset](https://huggingface.co/datasets/Attila1011/img_caption_EN_AppleFlair_Blip)
82lYI7TWVtvP
3rdson
cc-by
Core concepts of Agentic AI and AI agents
![AIClips675547-1024x585.png](AIClips675547-1024x585.png) Over the past year, there has been immense hype and discussion around AI, particularly **GenAI**, **Agentic AI**, and **RAG systems**. This buzz has sparked significant shifts across industries, with everyone scrambling to understand: *What exactly are agents? What defines "agentic AI"?* How do we distinguish an AI system as "agentic" versus a non-agentic tool? We’ve seen companies racing to adopt AI, startups pitching "agents-as-a-service," and a flood of new frameworks. But amid the noise, the fundamentals often get lost. That’s exactly why we’re breaking it all down here. In this article, we will be **explaining Agentic AI, AI agents, and recent GenAI trends** in the simplest way possible. Here’s what we’ll cover: 1. **Agentic AI** – What makes it revolutionary? 2. **AI Agents** – Core components that define them 3. **LLM Frameworks & Workflows** – The engine behind the magic We’ll also unpack key concepts like: - **Memory & Context Management** (How agents "remember") - **Prompt Engineering** (How to instruct AI Agents) - **Multi-Agent Communication** (When agents team up) - **Real-World Applications** (Where agents *actually* shine today) Stick with us until the end, and we’ll make sure you walk away with clarity on how these pieces fit into the bigger AI landscape. ![gen AI pub for all.webp](gen%20AI%20pub%20for%20all.webp) ## **So, What Is Agentic AI?** To understand this, let's start with the basics: **What are AI agents?** **AI agents** are systems powered by AI (typically LLMs) that interact with software, data, *and even hardware* to achieve specific goals. Think of them as proactive problem-solvers: they autonomously complete tasks, make decisions, and adapt to new information with no micromanaging required. It's crucial to note that what makes a system truly "agentic" goes beyond just behavior; the implementation matters too. Traditional automated systems using if-else logic can mimic agent-like behavior, but true AI agents are distinguished by how their decisions are made. Instead of following pre-programmed conditional logic, they use LLMs to actively make decisions and determine their course of action. This fundamental difference in implementation (LLM-driven decision making versus traditional programming logic) is what sets genuine AI agents apart from sophisticated automation. Unlike basic chatbots or static AI tools, agents **plan** and **decide** independently (guided by the user's input) until they nail the best result. But how do they achieve this? Through their brain: the LLM. The LLM lets them: 1. **Observe** their environment (e.g., data inputs, user requests). 2. **Orient** themselves to understand how to use their tools. 3. **Decide** on the optimal action. 4. **Execute** that action. In short, they’re *goal-driven, self-directed systems*, and Agentic AI is the field focused on building and refining these autonomous agents. --- ## **What’s an Autonomous Agent?** An autonomous agent is an advanced form of AI that can understand and respond to enquiries and then take action **without human intervention**. When given an objective, it can: - Generate tasks for itself, - Complete those tasks, - Move on to the next one, ...until the objective is fully achieved. --- ## **Autonomous Agents vs. AI Agents** While all autonomous agents are technically AI agents, **not all AI agents are autonomous**. 
Here’s the breakdown: - **AI agents** include assistive tools like copilots, which rely on human input to complete tasks. - **Autonomous agents** work independently, needing little to no human involvement. Note that both can learn and make decisions based on new information, but **only autonomous agents** can chain multiple tasks in sequence. --- ### **Let’s See an AI Agent in Action** Imagine an AI assistant built into your laptop that seamlessly manages tasks for you, like a virtual assistant. For example: - You ask it, *“Do I have any emails from Abhy?”* The agent **interprets** your request, **decides** to connect to your Gmail API, scans your inbox, and instantly pulls up every email from Abhy. - Or, *“What’s trending in NYT news today?”* The agent **recognises** it needs to search the web, crawls trusted sources (like NYT’s API), and spits out a bullet-point summary of key trends. This system is called an “agent” because at **every step**, it uses its brain (the LLM) to: 1. **Interpret** your goal 2. **Decide** which tools (email, web search, calendar) to use 3. **Execute** actions end-to-end Unlike basic chatbots that wait for step-by-step commands, AI agents **autonomously bridge intent and outcome**. They leverage tools, analyse context, and keep iterating until the job’s done. *Note*: This example is **software-based**, but there are also **hardware-based AI agents** (physical ones), like robots or self-driving cars. These use cameras, mics, and sensors to capture real-world data and then act on it (e.g., a warehouse robot navigating around obstacles). --- ## **Agentic AI** Now that we’ve nailed what agents are, agentic AI becomes straightforward. At its core, **Agentic AI** is the *autonomy engine* for AI systems. It’s the intelligence and methodology that lets agents act independently, the “how” behind their ability to plan, decide, and execute without hand-holding. Think of it as the **framework** (and mindset) for building agents that truly “think for themselves.” --- ## **Core Components of AI Agents** We’ll break down the core components into two categories: ### **1. Architectural Components of AI Agents** These are the foundational building blocks in every AI agent’s design. They include: **1. Large Language Models (LLMs): The Brain** LLMs are the powerhouses behind AI agents, much like the human brain. They’re responsible for: - Understanding user input - Deciding which tools to use - Generating final answers after processing. **2. Tools Integration: The “Hands”** Tools let AI agents interact with the digital world. In our earlier example, tools included Gmail APIs and web crawlers, which were used to fetch data from external sources. **3. Memory Systems: The “Recall”** Memory allows agents to retain and reuse information across interactions (think personalised context!). Without it, an agent is like a goldfish, forgetting every conversation instantly. ### AI agents can have any of the following memories: 1. **Short-term Memory:** Keeps track of the ongoing conversation, enabling the AI to maintain coherence within a single interaction. 2. **Long-term Memory:** Stores information across multiple interactions, allowing the AI to remember user preferences, past queries, and more. #### Long-term memory can further be split into 3 types - Episodic Memory: Remembers specific past events or interactions, enabling the AI to recall and reference previous exchanges. 
- Semantic Memory: Holds general knowledge and facts that the AI can draw upon to provide informed responses. - Procedural Memory: Anything that has been codified into the AI agent by us. It may include the structure of the system prompt, the tools we provide the agent, etc. ![1_l0oRfSsoJXjaexRmM3FQwg.png](1_l0oRfSsoJXjaexRmM3FQwg.png) --- ### **2. Cognitive Components of AI Agents** These define how agents “think” and act: **1. Perception: The “Senses”** This is the AI agent’s ability to gather and interpret data from its surroundings. Much like human senses, perception allows the agent to ‘see’ and ‘hear’ the world around it. In the AI agent example above, the agent demonstrated perception by interacting with APIs, databases, or web services to gather relevant information. **2. Reasoning: The “Logic”** Once an AI agent has gathered the relevant data through perception, it needs to make sense of that data. This is where reasoning comes into play. Reasoning involves analysing the collected data, identifying patterns, and drawing conclusions. It’s the process that allows an AI agent to transform raw data into actionable insights. **3. Action: The “Doing”** This is the ability of the AI agent to bring its decisions to life. The ability to take action based on perception and reasoning is what truly makes an AI agent autonomous. Actions can be physical, like a robot moving an object, or digital, such as a software agent sending an email. **4. Feedback & Learning: The “Growth”** One of the most fascinating aspects of AI agents is their ability to learn and improve over time. Learning allows AI agents to adapt to new situations, refine their decision-making processes, and become more efficient at their tasks. ![PHOTO-2025-02-19-16-10-59.jpg](PHOTO-2025-02-19-16-10-59.jpg) --- ## **Multi-Agent Systems (MAS)** Just like the name suggests, a **multi-agent system (MAS)** involves multiple AI agents teaming up to tackle tasks for a user or system. Instead of relying on one “do-it-all” agent, MAS uses a squad of specialised agents working together. Thanks to their flexibility, scalability, and domain expertise, MAS can solve **complex real-world problems**. --- ## **MAS Architectures** ### **1. Centralized Networks** In centralised networks, a central unit contains the global knowledge base, connects the agents, and oversees their information. A strength of this structure is the ease of communication between agents and uniform knowledge. A weakness is the dependence on the central unit; if it fails, the entire system of agents fails. **Example**: Like a conductor in an orchestra, directing every musician. ### **2. Decentralized Networks** In a decentralised network, there is no central agent or unit that controls or oversees information. The agents share information with their neighbours in a decentralised manner. Some benefits of decentralised networks are robustness and modularity. The failure of one agent does not cause the overall system to fail since there is no central unit. One challenge of decentralised agents is coordinating their behaviour to benefit other cooperating agents. **Example**: A flock of birds adjusting flight paths without a leader. --- ## **Next Up: Building AI Agents** Now that we’ve covered *what* AI agents are and *how* they work, let’s tackle the big question: **How do you actually build one?** (And which frameworks/libraries make it easier?) 
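As a first taste before we look at frameworks, here is a minimal, framework-free sketch of the perceive-reason-act loop described above. The `call_llm` function and the entries in `TOOLS` are hypothetical placeholders standing in for whatever model API and integrations you actually use:

```python
def call_llm(prompt: str) -> str:
    """Placeholder for any LLM API call (OpenAI, Anthropic, a local model, etc.)."""
    raise NotImplementedError

TOOLS = {
    "search_web": lambda query: f"(pretend search results for: {query})",
    "read_email": lambda sender: f"(pretend emails from: {sender})",
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # Reason: the LLM picks the next action based on the goal and what has happened so far
        decision = call_llm(
            "You are an agent. Decide the next step.\n"
            + "\n".join(history)
            + f"\nAvailable tools: {list(TOOLS)}. "
            "Reply as 'tool_name: input' or 'final: <answer>'."
        )
        if decision.startswith("final:"):
            return decision.partition("final:")[2].strip()           # Act: deliver the answer
        tool_name, _, tool_input = decision.partition(":")
        observation = TOOLS[tool_name.strip()](tool_input.strip())   # Act: call a tool
        history.append(f"{decision} -> {observation}")               # Perceive: record the result
    return "Stopped after reaching the step limit."
```

Real frameworks add routing, memory, retries, and multi-agent coordination on top of a loop like this, which is what the next section surveys.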
--- ### **Building AI Agents: Frameworks & Tools** Python dominates the AI/ML world, so familiarity with it unlocks countless SDKs and frameworks. But even non-coders can build agents using **drag-and-drop GUI tools**. Let’s break down the options: --- ## **Code-Based Frameworks** For developers who want granular control over workflows, memory, and multi-agent collaboration: 1. **LangGraph** Developed by the team behind LangChain, LangGraph takes things further by letting you design AI workflows as *graphs*. Imagine building a customer support system where one agent handles initial queries, another escalates complex issues, and a third schedules follow-ups; all connected like nodes on a flowchart. It’s perfect for multi-step processes that need to "remember" where they are in a task. 🔗 [Docs](https://langchain-ai.github.io/langgraph/) | [GitHub](https://github.com/langchain-ai/langgraph) 2. **Microsoft AutoGen** AutoGen is Microsoft’s answer to collaborative AI. With Microsoft AutoGen, you can have a system where one agent writes code, another reviews it for errors, and a third tests the final script. These agents debate, self-correct, and even use tools like APIs or calculators. It is ideal for coding teams or research projects where multiple perspectives matter. 🔗 [Docs](https://microsoft.github.io/autogen/stable/) | [GitHub](https://github.com/microsoft/autogen) 3. **CrewAI** CrewAI organizes agents into specialized roles, like a startup team. For example, a "Researcher" agent scours the web for data, a "Writer" drafts a report, and an "Editor" polishes it. They pass tasks back and forth, refining their work until it’s ready to ship with no micromanaging required. 🔗 [Docs](https://docs.crewai.com/introduction) | [GitHub](https://github.com/crewAIInc/crewAI) 4. **LlamaIndex** Formerly called GPT Index, LlamaIndex acts like a librarian for your AI agents. If you need your agent to reference a 100-page PDF, a SQL database, and a weather API, LlamaIndex is the framework to go to. It helps the agent fetch and connect data from all these sources, ensuring responses are informed and accurate. 🔗 [Docs](https://docs.llamaindex.ai/en/stable/) | [GitHub](https://github.com/run-llama/llama_index) 5. **Pydantic AI** Built by the Pydantic team, this framework acts as a data validator for your AI workflows. If your agent interacts with APIs, Pydantic AI checks that inputs and outputs match the expected data format, like ensuring a date field isn’t accidentally filled with text. No more "garbage in, garbage out" chaos. 🔗 [Docs](https://ai.pydantic.dev/) | [GitHub](https://github.com/pydantic/pydantic-ai) 6. **OpenAI Swarm** OpenAI’s experimental Swarm framework explores how lightweight AI agents can solve tasks collaboratively. One agent gathers data, another analyzes it, and a third acts on it. It’s not ready for production yet but it's worth mentioning. 🔗 [GitHub](https://github.com/openai/swarm) --- ### **Visual (GUI) Frameworks** 1. **Rivet** Rivet is like digital LEGO for AI. You just have to drag and drop nodes to connect ChatGPT to your CRM, add a "send email" action, and voilà, you’ve built an agent that auto-replies to customer inquiries. Perfect for business teams who want automation without coding. 🔗 [Website](https://rivet.ironcladapp.com/) 2. **Vellum** Vellum is the Swiss Army knife for prompt engineers. It allows you to test 10 versions of a prompt side-by-side, see which one gives the best results, and deploy it to your agent; all through a clean interface. 
It’s like A/B testing for AI workflows. 🔗 [Website](https://www.vellum.ai/) 3. **Langflow** Langflow is a drag-and-drop alternative to LangChain. You can just drag a "web search" node into your workflow, link it to a "summarize" node, and watch your agent turn a 10-article search into a crisp summary. It is great for explaining AI logic to your CEO. 🔗 [Website](https://www.langflow.org/) 4. **Flowise AI** Flowise AI is the open-source cousin of Langflow. You can use it to build a chatbot that answers HR questions by just linking your company handbook to an LLM; no coding, just drag, drop, and deploy. 🔗 [Website](https://flowiseai.com/) 5. **Chatbase** Chatbase lets you train a ChatGPT-like assistant on your own data. Upload your FAQ PDFs, tweak the design to match your brand, and embed it on your website. It’s like having a 24/7 customer service rep who actually reads the manual. 🔗 [Website](https://www.chatbase.co/) ### **Factors to Consider Before Choosing a Framework** 1. **Use Case** What’s your agent’s job? A coding assistant needs AutoGen’s teamwork, while a document chatbot thrives with Langflow’s simplicity. 2. **Criticality** Are you building a mission-critical system? Opt for battle-tested tools like LangGraph. If you’re just experimenting, a framework like Swarm is fine. 3. **Team Skills** If you have Python pros on your team, go for code-based frameworks; if you don’t, GUI tools like Rivet or Chatbase will save the day. 4. **Time/Budget** Need it yesterday? No-code tools speed things up. Got resources? Custom code offers long-term flexibility. 5. **Integration** If you need to plug in connectors such as the Slack or Jira APIs, check whether the framework supports them out-of-the-box. 6. **AIOps** If you expect to scale to thousands of users, prioritize frameworks with built-in monitoring, logging, and auto-scaling. --- ## **LLM Workflows: The “Conductor” Behind the Magic** LLM workflows are essentially a series of interconnected processes that ensure an AI system can understand user intent, maintain context, break down tasks, collaborate among agents, and ultimately deliver actionable results. Think of LLM workflows as the *recipe* your AI agents follow. Just like baking a cake requires mixing, baking, and frosting steps, LLM workflows chain prompts, tools, and logic into a sequence. For example, a customer support agent might: 1. **Analyze** a user’s complaint, 2. **Search** past tickets for similar issues, 3. **Draft** a response, and 4. **Escalate** if it’s urgent. Frameworks like LangGraph or Microsoft AutoGen let you orchestrate these steps like a playlist, with fewer coding headaches. --- ## **Context Management: How Agents “Remember”** Context management is the mechanism by which an AI system keeps track of ongoing interactions and relevant data. It ensures that the conversation or task remains coherent over multiple turns. Ever chatted with a bot that forgets your name two messages later? *That’s bad context management*. Modern agents use **memory systems** to retain details across interactions. For instance: - A travel agent remembers your allergy to seafood when booking restaurants. - A project manager agent tracks deadlines from prior chats. Tools like LlamaIndex or LangChain’s memory modules act as the agent’s “sticky notes”, keeping conversations coherent and personalized. 
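A minimal sketch of short-term context management looks something like the snippet below: keep a rolling window of recent messages and pass it back to the model on every turn. The `call_llm` function is a hypothetical placeholder for your model API:

```python
class ConversationMemory:
    """Rolling short-term memory: only the most recent turns are sent back to the model."""
    def __init__(self, max_turns: int = 10):
        self.max_turns = max_turns
        self.turns = []  # list of (role, message) tuples

    def add(self, role: str, message: str) -> None:
        self.turns.append((role, message))
        self.turns = self.turns[-self.max_turns:]  # drop the oldest turns

    def as_prompt(self) -> str:
        return "\n".join(f"{role}: {message}" for role, message in self.turns)

def chat(memory: ConversationMemory, user_message: str) -> str:
    memory.add("user", user_message)
    reply = call_llm(memory.as_prompt())  # hypothetical LLM call
    memory.add("assistant", reply)
    return reply
```

Long-term memory typically swaps the in-process list for a database or vector store, but the pattern of writing each turn and reading relevant context back on the next turn stays the same.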
--- ### **Prompt Engineering: Talking to AI Like a Pro** Prompt engineering involves crafting and refining the inputs given to an LLM so that it produces the most relevant and accurate outputs. Prompt engineering isn’t just typing questions. It’s **crafting instructions LLMs can’t ignore**. For example: - *Weak prompt*: “Summarize this article.” → Gets a generic response. - *Strong prompt*: “Summarize this article in 3 bullet points for a CEO. Focus on financial risks.” → Gold. Tools like Vellum or PromptFlow help you test and refine prompts like a mad scientist. --- ### **Task Planning & Decomposition: Breaking Down the Impossible** Agents don’t solve “Plan my wedding” in one go. They **chop big tasks into bite-sized steps**: 1. Book venue → 2. Create guest list → 3. Order cake → 4. Send invites. Task planning and decomposition involve breaking down a complex problem or query into smaller, more manageable subtasks. This methodical approach helps AI systems tackle complicated challenges step by step. CrewAI’s role-based agents excel here. --- ### **Multi-Agent Communication: When Bots Team Up** Multi-agent communication refers to the way multiple AI agents interact and share information with one another to collaboratively solve a problem. This is particularly useful in systems designed to handle complex or distributed tasks. Picture a hospital where one agent diagnoses symptoms, another checks drug interactions, and a third books follow-ups. **Multi-agent systems** let specialists collaborate: - **Centralized**: Like a CEO assigning tasks (AutoGen’s manager-worker teams). - **Decentralized**: Like Uber drivers coordinating via an app (OpenAI Swarm’s experimental approach). --- ### **Real-World Applications: Where Agents *Actually* Shine** - **Customer Support**: Chatbots that resolve 80% of queries without humans. - **Healthcare**: Diagnostic agents cross-referencing symptoms with medical journals. - **Finance**: Fraud-detection agents scanning transactions in real time. - **Logistics**: Warehouse robots coordinating deliveries (physical agents + decentralized MAS). Even **creatives** use them—like AI writing teams drafting blog outlines while you sip coffee. --- ## **Wrapping Up: The Future is Agentic** We've covered a lot of ground; from the basics of AI agents to the nitty-gritty of building them. Here's the key takeaway: **Agentic AI isn't just another tech buzzword**. It's a fundamental shift in how we interact with AI systems. Whether you're a developer diving into frameworks like LangGraph and AutoGen, or a business leader exploring no-code tools like Rivet and Chatbase, there's never been a better time to jump into the agentic AI revolution. Remember: The best AI agent isn't necessarily the most complex one. It's the one that **solves real problems** while being reliable, scalable, and (most importantly) actually useful in the real world. *The future of AI isn't just about smarter algorithms, it's about systems that think, plan, and act with purpose*. And that future is already here.
8eAX8A1gfdkJ
ready-tensor
cc-by-sa
Transformer Models for Automated PII Redaction: A Comprehensive Evaluation Across Diverse Datasets
![personal-records_tiny.jpg](personal-records_tiny.jpg) --DIVIDER--# TL;DR We automated PII redaction using transformer models like RoBERTa and DeBERTa, assessing their effectiveness on five datasets. RoBERTa was selected for its balance of performance and efficiency. The study introduced a redaction script combining RoBERTa, regex, and the Faker library to ensure data privacy by replacing real PII with fictitious yet plausible data.--DIVIDER--# Introduction In today’s digital age, the protection of Personally Identifiable Information (PII) has become a crucial concern for organizations and individuals alike. The increasing prevalence of digital records and the growing reliance on data-driven technologies have elevated the risk of exposing sensitive personal data. Unauthorized access to PII can lead to privacy breaches, identity theft, and significant legal and financial repercussions. As the amount of digital data grows, manual methods for PII redaction are no longer feasible, demanding more efficient, automated solutions. In this study, we tackle the challenge of automatically identifying and redacting PII using state-of-the-art transformer models. We trained six different models—ALBERT, DistilBERT, BERT, RoBERTa, T5, and DeBERTa—on five datasets to evaluate their effectiveness in detecting and redacting various types of PII. Our aim is to provide a comprehensive comparison of these models' performance and to present a robust approach to ensuring the security and privacy of personal data in large-scale digital documents.--DIVIDER--# Datasets Our study utilizes five datasets for training and evaluation. The [n2c2 2014 (National NLP Clinical Challenges)](https://portal.dbmi.hms.harvard.edu/projects/n2c2-2014/) dataset specifically focuses on the de-identification of protected health information (PHI) in medical records and is widely recognized in clinical natural language processing research. However, it is important to note that the n2c2 2014 dataset is not publicly available. Access can be requested through the provided link. Key characteristics of the n2c2 dataset: Origin: The dataset was originally created for the n2c2 de-identification challenge, which encourages advancements in automatically removing personal health information from medical records. Entities: The n2c2 dataset includes several types of PHI: - PERSON (consolidated from PATIENT and DOCTOR) - LOCATION - DATE - PHONE - EMAIL - Additional entities like AGE, IDNUM, and HOSPITAL The comprehensive nature of the dataset, with its wide variety of PHI types, allows for a detailed evaluation of PHI redaction techniques. We focused on the most critical types of PII—PERSON, LOCATION, PHONE_NUMBER, DATE, and EMAIL—in our redaction models, ensuring a manageable yet impactful scope for model training and validation. --DIVIDER--In addition to the n2c2 dataset, we expanded our study by incorporating four more datasets to enhance the diversity of data and evaluate the performance of our models across different domains. These datasets include a mixture of real-world and synthetic data, focusing on PII detection and redaction in various contexts. [CoNLL-2003](https://huggingface.co/datasets/eriktks/conll2003) The CoNLL-2003 dataset is widely used for Named Entity Recognition (NER) tasks. While primarily focused on identifying entities such as PERSON, LOCATION, ORGANIZATION, and MISC, it serves as a useful benchmark for training models on real-world text, helping improve general PII detection capabilities. 
[PII Masking 300k](https://huggingface.co/datasets/ai4privacy/pii-masking-300k) This dataset contains real and synthetic text samples with a variety of PII types, such as PERSON, LOCATION, PHONE_NUMBER, EMAIL, and more. While the full dataset contains 300,000 samples across multiple languages, we specifically used only the English text, reducing the dataset size to approximately 37,000 samples. This subset allowed us to focus on English PII redaction while maintaining a manageable dataset size. [Synthetic PII Finance Multilingual](https://huggingface.co/datasets/gretelai/synthetic_pii_finance_multilingual) This dataset is designed for multilingual PII redaction tasks in the financial domain, offering a diverse set of texts with financial jargon and various types of PII such as PERSON, DATE, IDNUM, PHONE_NUMBER, and LOCATION. Its multilingual nature allows models to generalize across languages, increasing the scope of PII detection. [Synthetic PII Dataset](https://github.com/microsoft/presidio-research/blob/master/data/synth_dataset_v2.json) (Presidio) The final dataset we used is available through Microsoft’s Presidio tool under its official GitHub repository. This dataset contains synthetic text annotated with various types of PII. It provides a valuable resource for evaluating PII detection and redaction models in a controlled and diverse synthetic environment.--DIVIDER--# Results In this evaluation, we will focus on macro recall as the main metric because, in Named Entity Recognition (NER), recall is often the most important metric. This is due to the fact that missing important entities (false negatives) can have significant consequences in real-world applications, making it crucial to maximize the identification of all relevant entities. :::info{title="Info"} # What is Macro Recall? **Macro recall** calculates the recall for each class independently and then takes the unweighted average across all classes. This means that each class contributes equally to the final recall score, regardless of how many instances are in each class. It is particularly useful in tasks like **Named Entity Recognition (NER)**, where you want to ensure that the model performs well across both majority and minority classes. In NER, failing to detect an entity (false negative) can have a significant impact, which is why macro recall is often a key focus. ::: ## Macro Recall for models/datasets ![macro_recall.svg](macro_recall1.svg) Following the comparison of the overall performance of our models across various datasets, we have chosen to concentrate our evaluation efforts on the CoNLL-2003 dataset. This dataset is recognized as a well-established benchmark in the field, making it ideal for a detailed assessment of our PII redaction capabilities. By focusing on this dataset, we aim to provide a clear and standardized measure of our model's effectiveness and efficiency in handling sensitive information. ## Macro Metrics on CoNLL2003 dataset ![conll2003_metrics.svg](conll2003_metrics.svg) ## Macro Recall per Class ![conll2003_recall_per_class.svg](conll2003_recall_per_class1.svg) In our evaluation, DeBERTa and RoBERTa perform very closely. However, DeBERTa is the largest model, requiring significantly more time and computational resources for training. Given that RoBERTa offers nearly the same level of performance but is smaller in size and more efficient to train, we have chosen RoBERTa for PII redaction. On the other hand, T5 performs the weakest among the models. 
Its architecture, designed for sequence-to-sequence tasks rather than token-level predictions, likely affects its effectiveness in PII redaction. Because it frames predictions as text generation, T5 is less suited to the focused, entity-specific nature of PII redaction, resulting in lower overall performance. Additionally, the detailed metrics per label for all models/datasets are available in the **`scores.zip`** file in the resources section. --DIVIDER--# Redaction of Personally Identifiable Information (PII) To develop a universally applicable PII redactor that enhances the privacy and security of any dataset, we developed a script that leverages the RoBERTa model and regular expressions to redact specific types of personally identifiable information (PII). The redaction process involves two key components: the RoBERTa model identifies names, addresses, and dates, while regex patterns focus on detecting and redacting phone numbers, emails, and URLs. Below are the steps involved: Regex Patterns: We crafted precise regex patterns to pinpoint emails, phone numbers, and URLs within the textual data. These patterns are tailored to detect a variety of formats, ensuring comprehensive coverage and robust detection. RoBERTa Model: The RoBERTa model is utilized to identify more complex PII elements such as names, addresses, and dates. This AI-driven approach enhances the accuracy of PII detection beyond the capabilities of regex alone. Faker Library: To replace identified PII, we use the Faker library, which generates realistic yet fictitious data mimicking the original information's structure. This maintains the text's integrity and utility while ensuring all sensitive details are securely anonymized. Search and Replace Functionality: Our script incorporates a dual mechanism where PII elements identified by either the RoBERTa model or regex patterns are replaced with corresponding fake data from Faker. This ensures that no real PII remains in the text, significantly reducing privacy risks while preserving the document's readability and format. Implementation: The implementation process is straightforward, involving the reading of text data, the application of the RoBERTa and regex identification methods, and the replacement of detected PII with Faker-generated data. This methodical approach ensures that the modified text remains practical for use without compromising on privacy.--DIVIDER--# Usage To use the model on your own data, download the [GitHub repository](https://github.com/readytensor/rt_roberta_pii_redactor) and follow the `usage` section in the readme. # Example You can see the redacted text in yellow and the text used as replacement in green: --DIVIDER-- ![redaction-example.png](redaction-example.png)--DIVIDER--# Summary In this study, we address the challenge of automatically redacting Personally Identifiable Information (PII) using transformer models. Given the growing risk of privacy breaches in the digital era, efficient PII redaction is crucial. We evaluated six models—ALBERT, DistilBERT, BERT, RoBERTa, T5, and DeBERTa—across five datasets, including the widely used n2c2 2014 dataset, to measure their effectiveness in detecting and redacting PII. Our evaluation focused on macro recall due to its importance in ensuring that critical entities are not missed. The CoNLL-2003 dataset served as the primary benchmark for this evaluation, due to its popularity in Named Entity Recognition (NER). 
Additional datasets included PII Masking 300k, Synthetic PII Finance Multilingual, and the Synthetic PII Dataset from Microsoft's Presidio tool. These diverse datasets ensured that the models were evaluated across a range of contexts and data types. We found that DeBERTa and RoBERTa performed very closely, with RoBERTa being chosen for PII redaction due to its similar performance and smaller size, making it more practical for training and deployment. T5 was the weakest performer, likely due to its architecture being optimized for sequence-to-sequence tasks rather than token-level predictions. To enhance privacy, we developed a redaction script using RoBERTa, along with regular expressions and the Faker library. This script replaces detected PII elements such as emails and phone numbers with realistic fake data while maintaining the document's readability and format. The implementation is straightforward and available in the provided GitHub repository.
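To make the redaction workflow described above more concrete, here is a simplified sketch of the regex-plus-Faker portion of such a pipeline. The `detect_model_entities` function is a hypothetical stand-in for the RoBERTa NER step, and the exact script in the linked repository may differ:

```python
import re
from faker import Faker

fake = Faker()

# Simple illustrative patterns; production patterns would be more exhaustive.
REGEX_REPLACEMENTS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), fake.email),        # emails
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), fake.phone_number),   # phone numbers
    (re.compile(r"https?://\S+"), fake.url),                      # URLs
]

def detect_model_entities(text: str):
    """Hypothetical stand-in for the RoBERTa NER model.

    Should return (start, end, label) spans for PERSON, LOCATION and DATE entities.
    """
    return []

FAKER_BY_LABEL = {"PERSON": fake.name, "LOCATION": fake.address, "DATE": fake.date}

def redact(text: str) -> str:
    # Replace model-detected entities first, right to left so character offsets stay valid
    for start, end, label in sorted(detect_model_entities(text), reverse=True):
        text = text[:start] + FAKER_BY_LABEL[label]() + text[end:]
    # Then replace regex-detected PII with fake equivalents
    for pattern, make_fake in REGEX_REPLACEMENTS:
        text = pattern.sub(lambda _: make_fake(), text)
    return text

print(redact("Contact John at [email protected] or +1 555 123 4567."))
```

Each detected span is swapped for a Faker value of the same type, which is what keeps the redacted text readable while removing the real PII.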
AewIJAspNLZz
mo.abdelhamid
Ranking Fear Emotions Using EEG and Machine Learning
![hero.jpg](hero.jpg)--DIVIDER--# Abstract This publication focuses on the classification of fear emotions using EEG signals and machine learning techniques. The study explores how different levels of fear can be distinguished based on power variations across various EEG frequency bands (Alpha, Beta, Theta, Delta, Gamma) using eight specific electrodes. An experiment involving seven participants was conducted, where their brain activity was recorded while they watched horror clips, and their fear levels were ranked. The EEG signals were processed, cleaned using ICA, and the power of each frequency band was calculated. Statistical tests, such as repeated measures ANOVA and post hoc tests, revealed significant differences in brain activity based on fear levels. These insights were then used to train machine learning models (SVM, KNN and Simple ANN) to classify fear emotions into binary classes. Among the models, a simple ANN model achieved the highest accuracy of 89%, surpassing SVM and KNN classifiers. The results of the study suggest that EEG signals can effectively reflect changes in fear intensity, particularly in the frontal lobe regions, but highlight the need for further exploration with a larger sample size and additional data refinement. This work contributes to the growing body of research on emotion recognition through brain-computer interfaces, emphasizing the potential of EEG-based systems in enhancing human-computer interaction by integrating emotion detection capabilities.--DIVIDER--# Motivation Imagine a world where technology not only responds to our commands but understands how we feel—where your computer can sense your frustration as you struggle with a task or detect your excitement when you discover something new. This is the vision behind emotion recognition in Human-Computer Interaction (HCI), and it is nothing short of revolutionary. Human emotions are the core of our interactions. Whether we’re engaging with other people or with machines, emotions dictate our decisions, drive our actions, and shape our experiences. Yet, computers—our most powerful tools—remain emotionally oblivious. They follow instructions with precision, but with no sense of context or empathy. This disconnect leaves a gap in how effectively we interact with technology. That’s where emotion detection, particularly through brain signals like EEG, becomes a game changer. By analyzing subtle changes in our brainwaves, we can teach machines to recognize emotional states, like fear, excitement, or frustration. In environments where user experience is key, such as education, mental health, gaming, or customer service, this ability can create deeply personalized and adaptive systems. Imagine a tutor that senses when a student feels overwhelmed and adjusts its teaching style or a health app that monitors stress levels to offer calming activities when needed most. At the heart of this project lies a specific challenge: fear. Fear is a powerful emotion—one that triggers profound changes in our brain activity. By training machines to recognize varying levels of fear, we unlock the potential to create systems that can respond dynamically to high-stress situations. For instance, imagine an AI-driven training program for firefighters that adapts based on their stress levels, ensuring they are mentally prepared for high-pressure environments. But this isn’t just about understanding fear; it’s about teaching machines to be more human. 
By embedding empathy into our technological systems, we open the door to a future where machines don’t just serve us—they support us emotionally, making our interactions with them more intuitive, responsive, and human-centered. In essence, this project isn’t just about classifying fear; it’s about transforming the way we interact with the world through technology, creating a future where emotions aren’t just understood by humans, but also by the machines that serve us. The impact of such a breakthrough in HCI could be transformative, bridging the emotional gap between man and machine.--DIVIDER--# Problem Statement This project aims to classify fear emotion into three distinct intensity levels using EEG signals. By analyzing the power of EEG signals across different frequency bands and electrodes, the goal is to identify which brain regions are most affected by varying levels of fear and develop machine learning models capable of accurately classifying these intensities. The research addresses the challenge of detecting emotional states in real-time to enhance human-computer interaction, particularly in stress-inducing scenarios.--DIVIDER-- # Background on Emotions Emotion recognition using EEG signals has been widely studied, as emotions are integral to human interactions and can be detected through brainwave patterns. One of the foundational models in emotion classification is Russell’s 2D model of affect, which categorizes emotions along two dimensions: arousal (ranging from calm to excited) and valence (ranging from pleasant to unpleasant). According to this model, each emotion occupies a unique position based on its combination of arousal and valence. For instance, fear is characterized by high arousal and low valence. ![russell.png](russell.png) **Figure 1: Russell’s 2D model for emotion classification** Expanding upon this, researchers like Harsh Dabas et al. have introduced a 3D model, which incorporates dominance alongside arousal and valence. This additional dimension captures the intensity of control or power the emotion exerts, making it a more nuanced approach to emotion classification. Both models serve as a basis for understanding how emotions like fear can be mapped through brain signals, with EEG data reflecting changes across these dimensions. ![3dmodel.png](3dmodel.png) **Figure 2: 3D model for emotion classification** --DIVIDER--# Background on EEG An electroencephalogram (EEG) records electrical activity in the brain, typically measured in microvolts. EEG signals are divided into five frequency bands—Gamma, Beta, Alpha, Theta, and Delta—each associated with different mental states. Gamma relates to higher mental activity, Beta to active thinking, Alpha to relaxation, Theta to deep meditation, and Delta to dreamless sleep. EEG signals are captured using multiple scalp electrodes, ranging from 1 to 1024, with more electrodes providing more brain activity information. Key features extracted from EEG include temporal (changes over time), spectral (power in frequency bands), and spatial (origin within the brain). However, EEG signals can be disrupted by factors like muscle movement and eye blinking, which require preprocessing to clean the data. 
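To illustrate what "power in frequency bands" means in practice, here is a small sketch that estimates band power for a single EEG channel using Welch's method. The sampling rate and band edges are illustrative and may differ from those used in this study:

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 4), "theta": (4, 7), "alpha": (7, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(signal: np.ndarray, fs: float = 250.0) -> dict:
    """Estimate average power in each EEG band for one channel (illustrative)."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(fs * 2))  # 2-second windows
    powers = {}
    for name, (low, high) in BANDS.items():
        mask = (freqs >= low) & (freqs < high)
        powers[name] = np.trapz(psd[mask], freqs[mask])     # integrate the PSD over the band
    return powers

# Example with a synthetic 10-second signal sampled at 250 Hz
rng = np.random.default_rng(0)
fake_eeg = rng.standard_normal(2500)
print(band_powers(fake_eeg))
```

The table below summarizes the conventional interpretation of each band.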
| Frequency | Associated With | | --- | --- | | Gamma (Above 30 Hz) | Higher Mental Activity, Consciousness, Perception | | Beta (13-30 Hz) | Active Thinking, Concentration, Cognition | | Alpha (7-13 Hz) | Relaxation(while awake), Pre-sleep Drowsiness | | Theta(4-7 Hz) | Dreams, Deep Meditation, REM sleep, Creativity | | Delta (Below 4 Hz) | Deep Dreamless Sleep, Loss of Body Awareness | # Electrode Placements and Their Significance In EEG studies, electrodes are placed at specific locations on the scalp to capture electrical activity in different regions of the brain. These placements follow standardized systems, such as the 10-20 system, which is used to ensure consistency across studies. The names of the electrodes reflect both their position on the scalp and the brain region they monitor. :::info{title="Info"} <h1> What is the 10-20 system?</h1> The 10-20 system is a standardized method for placing electrodes on the scalp in electroencephalography (EEG) experiments. It is named after the fact that the distances between adjacent electrodes are either 10% or 20% of the total front-to-back or right-to-left distance of the skull. This system ensures consistent, repeatable electrode positioning across studies and participants. The electrodes are positioned based on anatomical landmarks, such as the nasion (the bridge of the nose) and the inion (the bump at the back of the skull), ensuring they cover key areas of the brain. Each electrode label provides specific information about its location: ::: ![electrodes.png](electrodes.png) - F (Frontal): Electrodes with an “F” prefix are placed over the frontal lobe, which is responsible for higher cognitive functions such as decision-making, emotional regulation, and voluntary movement. In this study, the following frontal electrodes were used: - Fp1: Positioned on the left side of the forehead, this electrode captures brain activity in the left frontal lobe, an area often associated with emotional processing and mood regulation. - Fp2: Positioned on the right side of the forehead, this electrode monitors the right frontal lobe, which plays a role in emotional responses and is particularly sensitive to fear-related stimuli. - F4: Located further back on the right frontal region, this electrode captures broader cognitive functions, including emotional reactivity and executive control. - F7: Positioned on the left lateral side of the forehead, this electrode is involved in monitoring emotional regulation and facial recognition processes. - P (Parietal): Electrodes with a “P” prefix are placed over the parietal lobe, which is involved in processing sensory information, spatial orientation, and body awareness. In this study, the parietal electrodes include: - P3: Positioned on the left parietal region, this electrode helps capture sensory-motor integration and spatial awareness, contributing to how fear responses may affect bodily sensations. - P4: Located on the right parietal region, this electrode also monitors sensory processing, particularly in the context of body movement and spatial perception. - P7: Situated on the left posterior-lateral side of the scalp, this electrode captures signals from areas responsible for visual processing and spatial navigation. - P8: Positioned on the right posterior-lateral side, this electrode monitors similar functions, particularly in terms of visual and sensory integration. 
These electrodes were selected to capture activity from regions of the brain directly involved in both emotional and cognitive processes, making them ideal for studying fear intensity. By analyzing data from these placements, the research aims to identify how specific brain regions contribute to the experience and regulation of fear during the experiment. --DIVIDER--# Experiment Design This section details the experimental setup, methodology, and data processing pipeline used to investigate the relationship between fear intensity and brain activity, as captured by EEG signals. The experiment was carefully designed to elicit varying levels of fear in participants while ensuring the collection of high-quality EEG data. The subsequent data analysis and machine learning model development are also explained. ## Research Questions The primary objective of this research is to determine whether varying intensities of fear can be detected through EEG signals and classified into distinct levels. The research seeks to answer the following questions: 1. Can EEG signals reliably capture and differentiate between multiple levels of fear intensity? 2. Which EEG frequency bands and electrodes are most affected by changes in fear levels? 3. How accurately can machine learning models classify fear intensity based on EEG signal features?--DIVIDER--## Research Strategy The strategy to address these questions involved exposing participants to controlled fear stimuli and recording their brain activity using EEG. To generate fear responses, three carefully selected horror videos were presented to the participants. Each video was chosen to induce a different intensity of fear: low, moderate, and high. The participants’ brain activity was measured during the video sessions, followed by self-reported assessments of their emotional experience. The EEG signals were analyzed to identify significant differences between fear intensities, which were subsequently used to train machine learning models for classification purposes. --DIVIDER--## Hypothesis It is hypothesized that: - The power of EEG signals in certain frequency bands will vary significantly with the intensity of fear experienced by the participants. - Specific electrodes, particularly in the frontal lobe, will show significant changes in signal power as the level of fear increases. - These changes can be used to train machine learning models to accurately classify fear into multiple intensity levels.--DIVIDER--## EEG Experiment Design The experiment consisted of four key phases, each carefully designed to ensure the accuracy and consistency of the EEG recordings. 1. **Preparation Phase** Participants were seated in a quiet, controlled environment to minimize external stimuli. The OpenBCI Ultracortex Mark IV headset was then positioned on their heads, ensuring that all electrodes made solid contact with the scalp to reduce signal noise. The eight electrodes selected for this experiment (Fp1, Fp2, F4, F7, P3, P4, P7, and P8) were strategically placed to capture brain activity associated with emotional responses, especially in the frontal and parietal lobes. <br><br> 2. **Video Presentation Phase** The participants were shown three pre-selected horror videos, each lasting for a few minutes and designed to trigger varying levels of fear. After each video, participants were asked to self-assess their emotional response, rating the level of fear on a scale from 0 (neutral) to 3 (very intense).
This subjective feedback was essential for correlating self-reported fear levels with EEG signal data.<br><br> 3. **Break Phase** To prevent emotional carryover effects, participants took a short break (2-3 minutes) between each video. During this break, they were asked to relax and return to a neutral emotional state before continuing with the next video. This phase ensured that each video was evaluated independently without emotional bias from the preceding clip. <br><br> 4. **Post-Video Feedback** Immediately after watching each video, participants provided verbal feedback on their emotional experiences. This feedback was used to identify specific moments in the videos that elicited the strongest fear responses. These time points were critical for pinpointing segments of EEG data for detailed analysis. --DIVIDER--## Participants The experiment involved a total of 7 participants, consisting of 3 males and 4 females, all of whom were volunteers from different faculties at the German University in Cairo. Due to the complexity of the experiment and the need for high-quality EEG recordings, the sample size was limited to ensure manageable data collection and processing. All participants were fully briefed on the experimental procedure before the start of the study. Each participant underwent the same experimental procedure under consistent conditions. They were instructed to limit physical movement during the videos to minimize EEG signal contamination from muscle activity. The OpenBCI headset was adjusted for each participant to ensure accurate electrode placement. Following each video, participants were asked to recall specific moments that triggered the most fear, which helped in selecting key EEG data segments for analysis. The raw EEG data was recorded using the OpenBCI Ultracortex Mark IV headset shown below. The headset supports up to 19 electrodes, of which 8 were used: Fp1, Fp2, F4, F7, P3, P4, P7, and P8. ![headset.png](headset.png)--DIVIDER-- # Data Processing [EEGLAB](https://www.mathworks.com/matlabcentral/fileexchange/56415-eeglab) is an open-source MATLAB toolbox widely used for preprocessing and analyzing EEG data. It offers tools for cleaning, filtering, and removing artifacts from raw EEG signals, as well as performing advanced analyses like Independent Component Analysis (ICA) and time-frequency decomposition. In this study, EEGLAB was used to preprocess the raw EEG data, preparing it for further analysis and machine learning modeling. The following steps outline the preprocessing pipeline applied: 1. **Re-referencing:** The data was re-referenced using an average method to balance the signals across electrodes. <br><br> 2. **Filtering:** A high-pass filter was applied at 0.5 Hz to remove low-frequency drifts.<br><br> 3. **Artifact Removal:** Independent Component Analysis (ICA) was employed to identify and remove components associated with eye movements, muscle artifacts, and other noise. This ensured that only clean neural data was used for further analysis.<br> This is an example of ICA separating different components of a signal. ![ica.png](ica.png) 4. **Epoch Extraction:** Three epochs, each lasting two seconds, were extracted from the moments identified by participants as the most fear-inducing. These epochs provided key time windows for comparing brain activity across different fear intensities.<br> 5.
**Wavelet Decomposition:** The cleaned EEG signals were decomposed into their corresponding frequency bands (Delta, Theta, Alpha, Beta, and Gamma) using wavelet analysis. The power of each band was calculated for the selected epochs and log-transformed to normalize the data distribution. :::info{title="Info"} <h1>Wavelet Decomposition</h1> Wavelet decomposition is a signal processing technique used to break down EEG signals into different frequency bands, such as Delta, Theta, Alpha, Beta, and Gamma. Unlike traditional Fourier transforms, wavelet decomposition provides both time and frequency information, making it ideal for analyzing non-stationary signals like EEG. This allows for a detailed examination of how brainwave power varies over time and across different frequency bands. ::: --DIVIDER--# Statistical Analysis ## ANOVA A repeated measures ANOVA was conducted to examine whether significant differences existed in EEG signal power across the three levels of fear intensity (low, medium, high). This statistical test was chosen because it compares the same participants under different conditions, controlling for individual variability. The analysis was performed for each electrode and frequency band (Alpha, Beta, Theta, Delta, and Gamma), identifying which combinations showed significant changes in brain activity in response to varying fear levels. Where the assumption of sphericity was violated, the Greenhouse-Geisser correction was applied to adjust the degrees of freedom and ensure accurate p-values. The results revealed significant differences in EEG signal power, particularly in the Alpha, Theta, and Delta bands, for specific electrodes such as Fp2 and F7, indicating their sensitivity to changes in fear intensity. These findings provided critical insights for feature selection in subsequent machine learning model development. Below is a table of p-values for each of the bands: --DIVIDER--<table> <tr> <th>Electrode</th> <th>Alpha</th> <th>Beta</th> <th>Theta</th> <th>Delta</th> <th>Gamma</th> </tr> <tr> <td>Fp1</td> <td>0.341</td> <td>0.919</td> <td>0.346</td> <td>0.237</td> <td>0.971</td> </tr> <tr> <td>Fp2</td> <td>&lt;0.001 *</td> <td>0.018 *</td> <td>&lt;0.001 *</td> <td>&lt;0.001 *</td> <td>0.130</td> </tr> <tr> <td>F4</td> <td>0.078</td> <td>0.714</td> <td>0.007 *</td> <td>0.006 *</td> <td>0.725</td> </tr> <tr> <td>F7</td> <td>&lt;0.001 *</td> <td>0.252</td> <td>0.003 *</td> <td>&lt;0.001 *</td> <td>0.155</td> </tr> <tr> <td>P3</td> <td>0.474</td> <td>0.355</td> <td>0.310</td> <td>0.327</td> <td>0.924</td> </tr> <tr> <td>P4</td> <td>0.710</td> <td>0.761</td> <td>0.800</td> <td>0.316</td> <td>0.870</td> </tr> <tr> <td>P7</td> <td>0.298</td> <td>0.126</td> <td>0.573</td> <td>0.102</td> <td>0.461</td> </tr> <tr> <td>P8</td> <td>0.325</td> <td>0.409</td> <td>0.319</td> <td>0.244</td> <td>0.533</td> </tr> </table> --DIVIDER--## Post-hoc test (Paired t-test) Following the repeated measures ANOVA, post-hoc tests were conducted to identify the specific differences between pairs of fear intensity levels for each electrode and frequency band that showed significant effects.
This additional analysis helps determine which pairs of fear levels (low, medium, high) demonstrate statistically significant differences in EEG signal power.--DIVIDER--<h3>Fp2 Electrode</h3> <table> <tr> <th>Frequency Band</th> <th>Low vs Medium</th> <th>Low vs High</th> <th>Medium vs High</th> </tr> <tr> <td>Alpha</td> <td>&lt;0.001*</td> <td>0.023*</td> <td>0.074</td> </tr> <tr> <td>Beta</td> <td>0.011*</td> <td>0.012*</td> <td>0.624</td> </tr> <tr> <td>Theta</td> <td>&lt;0.001*</td> <td>0.112</td> <td>0.064</td> </tr> <tr> <td>Delta</td> <td>&lt;0.001*</td> <td>0.362</td> <td>0.005*</td> </tr> </table>--DIVIDER--<h3>F7 Electrode</h3> <table> <tr> <th>Frequency Band</th> <th>Low vs Medium</th> <th>Low vs High</th> <th>Medium vs High</th> </tr> <tr> <td>Alpha</td> <td>&lt;0.001*</td> <td>0.730</td> <td>0.003*</td> </tr> <tr> <td>Theta</td> <td>&lt;0.001*</td> <td>0.618</td> <td>0.003*</td> </tr> <tr> <td>Delta</td> <td>&lt;0.001*</td> <td>0.581</td> <td>&lt;0.001*</td> </tr> </table>--DIVIDER--<h3>F4 Electrode</h3> <table> <tr> <th>Frequency Band</th> <th>Low vs Medium</th> <th>Low vs High</th> <th>Medium vs High</th> </tr> <tr> <td>Theta</td> <td>0.318</td> <td>0.031*</td> <td>0.011*</td> </tr> <tr> <td>Delta</td> <td>0.263</td> <td>0.025*</td> <td>0.007*</td> </tr> </table>--DIVIDER--Overall, the post-hoc tests confirmed that significant differences in brain activity were most pronounced between the lower two levels of fear (low and medium), particularly in the Alpha, Beta, Theta, and Delta frequency bands. These results were critical in refining the selection of EEG features for the machine learning models, as they pinpointed the electrode-frequency pairs that were most sensitive to changes in fear intensity.--DIVIDER--# Fear Intensity Classification Based on the results of the post-hoc tests, it was observed that significant differences in EEG signal power occurred only between certain pairs of fear intensity levels. Therefore, the machine learning models were designed as binary classifiers to distinguish between the first and second levels of fear intensity. The models compared in this study include two versions of Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and a simple ANN. <h2> Features </h2> The machine learning models were trained using the power of nine electrode-frequency band combinations identified by the repeated measures ANOVA. These features capture the changes in EEG signal power across different fear intensity levels, providing the necessary input for the classifiers. <h2> Model Evaluation</h2> Each model was evaluated using 5-Fold cross-validation, ensuring robust performance estimation. The average accuracy of the five folds was calculated and used as the primary metric for model evaluation. <h2> Support Vector Machine (SVM) </h2> SVM is a well-known method for EEG emotion classification. Two versions of SVM were trained for this task: - SVM with RBF kernel: The radial basis function (RBF) kernel SVM achieved an average accuracy of 72%. - SVM with Linear kernel: The linear kernel SVM outperformed the RBF kernel, achieving an average accuracy of 82%. <h2>K-Nearest Neighbors (KNN)</h2> The KNN classifier, trained with K = 3, also achieved an accuracy of 72%, similar to the SVM with the RBF kernel. KNN is a simple yet effective classifier that assigns class labels based on the majority vote of its nearest neighbors. 
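To make the evaluation setup concrete, below is a minimal, illustrative sketch of the 5-fold cross-validation comparison described above. The study does not specify its implementation, so this sketch uses scikit-learn, and the feature matrix `X` merely stands in for the nine significant electrode-frequency band power values; the placeholder data, sample count, and the added standardization step are assumptions for illustration only.

```python
# Illustrative sketch only: compares the classifiers described above with
# 5-fold cross-validation. Replace the random placeholders with the real
# log-transformed band-power features and binary fear labels.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X = rng.normal(size=(42, 9))     # hypothetical: samples x 9 electrode-band power features
y = rng.integers(0, 2, size=42)  # hypothetical binary labels: 0 = low fear, 1 = medium fear

models = {
    "SVM (RBF kernel)": SVC(kernel="rbf"),
    "SVM (linear kernel)": SVC(kernel="linear"),
    "KNN (k=3)": KNeighborsClassifier(n_neighbors=3),
}

for name, model in models.items():
    # Standardizing the features is an added assumption, not described in the study.
    pipeline = make_pipeline(StandardScaler(), model)
    scores = cross_val_score(pipeline, X, y, cv=5)
    print(f"{name}: mean 5-fold accuracy = {scores.mean():.2f}")
```

With real features in place of the placeholders, the same loop reproduces the comparison of the RBF-kernel SVM, the linear-kernel SVM, and KNN with K = 3 reported in this section.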
<h2>Simple ANN</h2> A small neural network, consisting of two hidden layers with 18 and 9 units respectively, was trained for the classification task. Both hidden layers and the output layer used sigmoid activation functions. The network was trained using the Adam optimizer with a learning rate of 0.01. The neural network achieved the highest accuracy among all models, with an accuracy of 89%. This demonstrates its ability to model the non-linear relationships in EEG data more effectively than the other classifiers. <br><br> ![classifiers.png](classifiers.png)--DIVIDER--# Conclusion ## Achievements This research demonstrated significant changes in brainwave activity related to varying levels of fear intensity, particularly in the Alpha, Beta, Theta, and Delta bands across specific electrodes (Fp2, F7, and F4). Machine learning models were successfully trained to classify fear intensity, with the neural network achieving the highest accuracy at 89%. The findings suggest the potential of EEG signals in detecting emotional responses, particularly fear, and the effectiveness of classifiers like SVM, KNN, and neural networks for this task.--DIVIDER--## Limitations Several limitations were encountered during the experiment. First, the small sample size of participants may have impacted the generalizability of the results. Increasing the number of participants would likely improve the robustness and reliability of the findings. Second, the application of the EEG headset was challenging, particularly for female participants, which sometimes resulted in poor signal quality due to improper electrode contact. Additionally, technical issues with the headset’s wires restricted the use of only 8 electrodes, even though the headset supports up to 19 locations. This limitation may have reduced the resolution of the recorded EEG signals and potentially impacted the accuracy of the analysis. Addressing these limitations in future studies could further enhance the accuracy and reliability of EEG-based fear detection.--DIVIDER--## Summary This study explored the relationship between EEG signals and varying levels of fear intensity, focusing on identifying significant changes in brainwave activity. EEG data from eight electrodes were analyzed, with the Alpha, Beta, Theta, and Delta frequency bands showing significant correlations to fear levels. Machine learning models, including SVM, KNN, and a neural network, were trained to classify fear intensity between low and medium levels. The neural network achieved the highest accuracy at 89%. While the study demonstrated promising results in emotion classification, limitations such as a small sample size and equipment challenges suggest that future work could further refine the methodology and improve the reliability of findings.--DIVIDER--# References 1. J. A. Russell, “Affective space is bipolar.,” Journal of Personality and Social Psychology, vol. 37, pp. 345–356, 1979. [Link](https://www.semanticscholar.org/paper/Affective-space-is-bipolar.-Russell/36f2fcd0459e24f62b20719bb809ce5cbcda240f) 2. H. Dabas, C. Sethi, C. Dua, M. Dalawat, and D. Sethia, “Emotion classification using eeg signals,” in Proceedings of the 2018 2nd International Conference on Computer Science and Artificial Intelligence, CSAI ’18, (New York, NY, USA), p. 380–384, Association for Computing Machinery, 2018. [Link](https://www.researchgate.net/publication/331424513_Emotion_Classification_Using_EEG_Signals) 3. EEGLAB Matlab package https://sccn.ucsd.edu/eeglab/index.php 4. 
OpenBCI (Headset Manufacturer) https://openbci.com/--DIVIDER--
dLPDzlkDb51e
ready-tensor
cc-by-sa
From Thousands to Millions: A Flexible Tool for Generating Scalable TSP Datasets
![tsp_problems_chart.png](tsp_problems_chart.png)--DIVIDER--# TL;DR We present an open-source tool for generating large-scale Traveling Salesman Problem (TSP) datasets in an efficient format, overcoming TSPLIB limitations. The tool is flexible and supports extensive training data generation, enabling modern ML approaches like Large Language Models (LLMs) for TSP solving.--DIVIDER--# Introduction and Motivation The Traveling Salesman Problem (TSP) is a well-studied NP-hard problem with applications in logistics, circuit board manufacturing, and DNA sequencing. Effective tools like Concorde and Gurobi provide strong solutions, setting high benchmarks for new approaches. Our project explores the potential of Large Language Models (LLMs) in solving TSP. While LLMs are not typically used for NP-hard problems, we aim to push their boundaries in combinatorial optimization. Benchmarking LLMs against established solvers like Concorde allows us to evaluate their effectiveness in this challenging domain. To train data-hungry models like LLMs, access to large-scale datasets that can be efficiently processed is crucial. However, existing datasets and formats, such as TSPLIB, are limited in scale, flexibility, and efficiency, making them less suitable for training modern machine learning models. To address these limitations, we have developed a tool for generating large-scale TSP datasets, enabling the creation of extensive training data and providing a flexible resource for the optimization community. In the following sections, we will detail the structure and usage of our tool, describe the datasets we've generated, and discuss potential extensions and future work.--DIVIDER--# Evaluating TSP Dataset Formats In developing our TSP dataset generator, we conducted a thorough evaluation of various data formats to find the optimal balance between efficiency, flexibility, and accessibility. This process involved assessing several options, including JSON, YAML, CSV, and more specialized formats like HDF5. Each format presented its own set of advantages and challenges: 1. **HDF5**: While highly efficient for large datasets, it requires specialized libraries and is not easily human-readable, limiting accessibility. 2. **CSV**: Simple and widely supported, but lacks the flexibility to represent complex metadata and nested structures efficiently. 3. **YAML**: More efficient than JSON in terms of file size, but reading and writing operations were notably slower in our tests. 4. **JSON**: Offers a good balance of human readability, flexibility, and widespread support across programming languages and tools. After careful consideration of these options, we ultimately chose JSON as our data format. While not the most efficient in terms of raw storage, JSON provides several advantages that align with our goals: 1. Human Readability: JSON files can be easily inspected and understood without specialized tools. 2. Flexibility: JSON's structure allows for easy addition of metadata and complex nested data. 3. Wide Support: Most programming languages and data processing tools have built-in JSON support. 4. Balance of Performance: While not the most efficient, JSON offers reasonable performance for reading and writing operations.--DIVIDER--# Configuring the TSP Dataset Generator Our TSP dataset generator uses JSON configuration files to provide a flexible and intuitive setup process. There are two main configuration files: 1. `tsp_scenarios.json`: This file defines the various TSP generation scenarios. 2. 
`dataset_gen_config.json`: This file specifies the general settings for dataset generation. --DIVIDER--## TSP Scenario Configuration The `tsp_scenarios.json` file allows users to define multiple TSP scenarios. Each scenario is uniquely named and includes parameters such as the number of examples, the number of nodes, and the coordinate space for sampling. Here's an example configuration: ```json { "version": "1.0.0", "generation_scenarios": { "tsp_10_50k_u_100x100": { "description": "TSP with 10 nodes, 50,000 examples, uniformly sampled in a 100x100 coordinate space.", "num_examples": 50000, "num_nodes": 10, "sampling_method": "uniform", "coordinate_space": { "x_start": 0, "x_end": 100, "y_start": 0, "y_end": 100 }, "seed": 42 } } } ``` The key `"tsp_10_50k_u_100x100"` refers to the scenario name defined by the user. The naming convention is defined in a section below. Key parameters include: - The `"generation_scenarios"` object contains a list of scenarios, each identified by a unique key (e.g., `"tsp_10_50k_u_100x100"`). - `"num_examples"`: The number of TSP instances to generate. - `"num_nodes"`: The number of nodes (cities) in each TSP instance. - `"sampling_method"`: The method used for generating node coordinates (currently supports "uniform"). - `"coordinate_space"`: Defines the boundaries for node placement. - `"seed"`: Ensures reproducibility of the generated datasets.--DIVIDER--## General Configuration The `dataset_gen_config.json` file controls the overall dataset generation process, specifying which scenario to run, how many samples to store per file, and metadata about the dataset: ```json { "scenario": "tsp_10_50k_u_100x100", "num_samples_per_file": 10000, "metadata": { "description": "Synthetic TSP problems generated for algorithm testing.", "creator": { "name": "ReadyTensor Inc.", "url": "https://www.readytensor.ai", "email": "[email protected]" }, "license": "CC BY-SA 4.0" } } ``` This file specifies: - `"scenario"`: The scenario to use from `tsp_scenarios.json`. - `"num_samples_per_file"`: The number of samples to include in each output file, allowing for efficient data management. - `"metadata"`: Represents metadata about the dataset, including description, creator information, and licensing. By adjusting these configuration files, users can easily generate TSP datasets tailored to their specific research or benchmarking needs, from small-scale test sets to large-scale training datasets, with flexible control over how the data is split across files. --DIVIDER--:::info{title="Info"} The `num_samples_per_file` parameter is particularly important for managing large datasets efficiently. It determines how the generated samples are batched into separate files. For example, if the total number of examples (num_examples in `tsp_scenarios.json`) is 100,000 and num_samples_per_file is set to 10,000, the generator will create 10 separate files, each containing 10,000 TSP problem instances. This batching approach allows for easier data handling, especially when working with machine learning frameworks that use data generators for training. :::--DIVIDER-- ## JSON Structure for Generated TSP Datasets Having selected JSON as our data format for generated TSP datasets, we designed a structure that efficiently represents TSP problems while providing comprehensive metadata. 
Here's an example of our JSON structure for a generated dataset:--DIVIDER--```json { "dataset_name": "tsp_10_50k_u_100x100", "description": "TSP with 10 nodes, 50,000 examples, uniformly sampled in a 100x100 coordinate space.", "total_count": 50000, "total_parts": 5, "part_number": 1, "samples_in_part": 10000, "number_of_nodes": 10, "coordinate_space": { "x_start": 0, "x_end": 100, "y_start": 0, "y_end": 100 }, "sampling_method": "uniform", "metadata": { "description": "Synthetic TSP problems generated for algorithm testing.", "creator": { "name": "ReadyTensor Inc.", "url": "https://www.readytensor.ai", "email": "[email protected]" }, "license": "CC BY-SA 4.0" }, "problems": [ { "name": "476f819e919e34e5e38d08b7ccd0fa7b", "node_coordinates": [ [17.9432, 24.2407], [48.4047, 93.5308], [48.746, 89.7431] // ... more coordinates ... ] } // ... more problems ... ] } ```--DIVIDER--Explanation: - `"dataset_name"` and `"description"`: provide context about the dataset. - `"total_count"`: specifies the total number of problem instances in the dataset. - `"total_parts"` and `"part_number"` indicate how the dataset is split across multiple files. - `"problems"`: is an array that stores the TSP problem instances, with each instance identified by a unique "name" and containing a list of "node_coordinates". --DIVIDER--:::info{title="Info"} Note that this example represents the first part (`"part_number": 1`) out of five total parts (`"total_parts": 5`) for the scenario "tsp_10_50k_u_100x100". Each part contains 10,000 samples (`"samples_in_part": 10000`), collectively making up the total 50,000 examples in the full dataset. :::--DIVIDER--Key features of this format include: 1. Comprehensive Metadata: Each file includes detailed information about the dataset and its generation parameters. 2. Batching Information: Clear details about how the dataset is split across files. 3. Problem Specifications: Descriptions of the TSP instances' characteristics. 4. Multiple Instances: The ability to store multiple TSP problems in a single file. 5. Simple Coordinate Representation: Node coordinates are stored as arrays for easy parsing. This format allows researchers and practitioners to generate, store, and load large TSP datasets efficiently, facilitating comprehensive algorithm testing and machine learning model training while maintaining accessibility and ease of use. --DIVIDER--# Using the TSP Dataset Generator The TSP Dataset Generator allows you to create large-scale TSP datasets using flexible JSON configuration files. For detailed setup instructions, refer to the [GitHub repository](https://github.com/readytensor/rt_tsp_data_gen_publication), which is also linked in the **Models** section of this publication. Below is a high-level overview of how to use the tool. **Step 1: Configure Your Scenarios** Before running the generator, set up your configuration files: - `tsp_scenarios.json`: Define the TSP scenarios, specifying parameters such as the number of nodes, problem instances, sampling method, and coordinate space. - `dataset_gen_config.json`: Specify the scenario to run, the number of samples per file, and metadata about the dataset. Refer to the Configuring the TSP Dataset Generator section for detailed instructions on setting up these files. 
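As an optional sanity check at the end of Step 1, the short sketch below (not part of the tool itself; the file paths are assumptions about where the configuration files live) loads both files with Python's standard `json` module and confirms that the scenario selected in `dataset_gen_config.json` is actually defined in `tsp_scenarios.json`:

```python
# Optional sanity check before running the generator (illustrative sketch).
# The paths below are assumptions; adjust them to your local repository layout.
import json

with open("tsp_scenarios.json", "r", encoding="utf-8") as f:
    scenarios = json.load(f)["generation_scenarios"]

with open("dataset_gen_config.json", "r", encoding="utf-8") as f:
    gen_config = json.load(f)

selected = gen_config["scenario"]
if selected not in scenarios:
    raise ValueError(f"Scenario '{selected}' is not defined in tsp_scenarios.json")

print(f"Using scenario '{selected}': {scenarios[selected]['description']}")
```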
**Step 2: Run the Generator** Once your configuration files are ready, generate your TSP datasets by running the following command: ```bash python src/tsp_generator.py ``` The generator will use the configurations to create the datasets, splitting them into multiple files based on the specified settings. **Step 3: Access Your Generated Datasets** The generated datasets will be saved in the `data/` directory, organized by scenario name. Each file will contain multiple TSP problem instances and associated metadata. For more detailed setup instructions, including virtual environment creation and dependency installation, refer to the [README in the GitHub repository](https://github.com/readytensor/rt_tsp_data_gen_publication). --DIVIDER--# Datasets with Solutions In this section, we provide a detailed breakdown of the generated datasets and the solution files that accompany them. While our TSP dataset generator only creates the problem instances, for convenience, we are also sharing solved versions of these problems. These solutions were generated using the Concorde TSP solver and can be found in the "Resources" section of this publication. ### Generated Datasets Our tool is flexible, allowing users to generate any number of samples for any number of cities. However, for convenience, we have chosen to generate and share the following six scenarios, as they are commonly used in the literature to benchmark TSP algorithms: - **10-city problems**: 50,000 samples - **20-city problems**: 50,000 samples - **25-city problems**: 50,000 samples - **50-city problems**: 50,000 samples - **100-city problems**: 50,000 samples - **200-city problems**: 50,000 samples Each dataset is stored in JSON format and is split into multiple files for easier handling. The structure of these files follows the format described in the **JSON Structure for TSP Datasets** section, which includes metadata, coordinate space specifications, and the node coordinates for each problem instance. ### Concorde Solutions While our generator does not solve the TSP problems, we have pre-solved the generated datasets using the Concorde solver. Concorde is one of the most well-known and effective solvers for TSP, providing exact solutions for even large-scale instances. The solved problems are uploaded in the "Resources" section of this publication for easy access. The structure of the solution files extends the format of the problem files, adding a `solutions` key to each problem instance. Here’s an example: ```json { "dataset_name": "tsp_10_50k_u_100x100", "description": "TSP with 10 nodes, 50,000 examples, uniformly sampled in a 100x100 coordinate space.", "total_count": 50000, "total_parts": 5, "part_number": 1, "samples_in_part": 10000, "number_of_nodes": 10, "coordinate_space": { "x_start": 0, "x_end": 100, "y_start": 0, "y_end": 100 }, "sampling_method": "uniform", "metadata": { "description": "Synthetic TSP problems generated for algorithm testing.", "creator": { "name": "ReadyTensor Inc.", "url": "https://www.readytensor.ai", "email": "[email protected]" }, "license": "CC BY-SA 4.0" }, "problems": [ { "name": "476f819e919e34e5e38d08b7ccd0fa7b", "node_coordinates": [ [17.9432, 24.2407], [48.4047, 93.5308], [48.746, 89.7431] // ... more coordinates ... ], "solutions": [ { "method": "concorde", "tour": [0, 4, 3, 5, 2, 1, 8, 9, 7, 6], "distance": 279.9693758152819 } ] } // ... more problems ... 
] } ``` In this format: - The `solutions` key is nested inside each individual problem, and it is a list to allow for multiple solutions from different methods or tools (e.g., Concorde, nearest neighbor, or custom algorithms). - The `tour` key represents the order of the nodes in the optimal tour. - The `distance` key provides the total distance of the tour, computed by the solver. This structure ensures that each problem instance can store multiple solutions, enabling researchers to compare different methods or approaches. ### Accessing the Solved Datasets The solved datasets are available for download in the **Resources** section of this publication. These files are organized similarly to the original problem datasets, with each file containing a portion of the full dataset along with the corresponding solutions. This combination of generated problems and solved datasets offers a versatile resource for both testing new algorithms and benchmarking against known optimal solutions. --DIVIDER-- ## Future Work and Potential Extensions The datasets generated by our tool are just the beginning. In future work, we plan to use these datasets for training models, including exploring Large Language Models (LLMs) to solve TSP problems. While classical solvers like Concorde excel at solving TSP instances, training machine learning models on large-scale datasets presents an opportunity to develop new approaches, particularly for more complex or generalized versions of the problem. In addition to our focus on training LLMs, we are considering several potential extensions to the data generation process: - **Alternative Sampling Techniques**: Currently, our tool supports uniform sampling within a specified coordinate space. In the future, we plan to introduce more sophisticated sampling methods, such as Gaussian sampling, which could simulate clusters of cities, or clustered sampling to represent more realistic scenarios with varying densities of nodes. These alternative techniques would allow users to generate datasets that better match the characteristics of real-world problems. - **Scaling Up and Performance Improvements**: As users demand even larger datasets or more complex problem instances, we are exploring ways to further optimize the tool for performance and scalability. This may include parallel processing or optimizing the file structure for faster data access and storage. We believe that these enhancements will expand the tool's applicability and make it even more useful for researchers and practitioners working on TSP and related optimization problems. --DIVIDER-- ## Conclusion The TSP Dataset Generator provides a flexible, open-source solution for generating large-scale datasets tailored to specific problem configurations. By overcoming the limitations of traditional formats like TSPLIB, our tool enables the creation of extensive datasets that are both efficient and easy to manage. With the added benefit of storing multiple problem instances in a single file, it is well-suited for modern algorithm testing and machine learning applications. We invite the community to explore the tool, generate their own datasets, and contribute to its development. The GitHub repository is open for collaboration, and we encourage users to suggest improvements, add new features, or submit alternative sampling techniques. You can file issues or submit pull requests directly in the repository to help make the tool even better. 
Whether you are working on classical optimization algorithms or experimenting with new machine learning models, this tool offers a valuable resource to support your research. Together, we can continue to push the boundaries of TSP research and improve the tools available for solving one of the most challenging problems in combinatorial optimization.
DM3Ao23CIocT
ready-tensor
cc-by-sa
Python Docstrings for Machine Learning Models
![docstrings.svg](docstrings.svg)--DIVIDER--tl;dr In this tutorial, you will learn how to master the art of effectively documenting your machine learning code with Google, Numpy, and reStructuredText docstring styles for improved readability and maintainability.--DIVIDER--# Tutorial Overview Welcome to our tutorial on Python docstrings for machine learning models! As data scientists and machine learning engineers, have you ever revisited your old code and struggled to understand what it does? Or maybe a colleague needed to work with your code, and you had to spend time explaining it to them? This is where the use of docstrings in Python comes into play. In this tutorial, we will explore three popular styles of docstrings: Google-style, Numpy-style, and reStructuredText. The goal isn't to use all three, but to understand their differences, strengths, and nuances so that you can choose the style that best suits your projects and way of working. Here's a brief outline of what we'll cover: - **Introduction to Docstrings**: We'll start by discussing what docstrings are and how they can be used in Python to document your code effectively. - **Why Docstrings Matter in Machine Learning Projects**: After understanding their importance, we'll discuss why something seemingly trivial as docstrings can have a significant impact on the productivity of a data science team. - **Exploring Docstring Styles**: We will delve into the three primary docstring styles - Google, Numpy, and reStructuredText. Each has its own structure, format, and use cases which we will cover in detail. - **Choosing the Right Docstring Style for Your Project**: In this section, we'll discuss considerations for choosing the appropriate docstring style for your needs. This will include factors such as the nature of the project, the team's familiarity with the style, and the tools used for documentation generation. - **Best Practices for Writing and Maintaining Docstrings**: Lastly, we will share some practical tips and best practices for writing clear, useful, and maintainable docstrings in your machine learning projects. By the end of this tutorial, you'll have a good understanding of the different docstring styles and be able to select and implement the one that best aligns with your machine learning project's needs. Let's boost our code documentation practices together! -----DIVIDER-- # Introduction to Docstrings Before we delve into the importance of docstrings in machine learning projects, let's first understand what docstrings are. In Python, a docstring is a string literal that occurs as the first statement in a module, function, class, or method definition. Enclosed by triple quotes (either ''' or """), docstrings provide a convenient way to associate documentation with Python modules, functions, classes, and methods. Consider the following example of a function that scales a numpy array, which is a common operation in data preprocessing in machine learning: ```python import numpy as np def scale_array(array: np.ndarray, factor: float) -> np.ndarray: """ This function scales a numpy array by a given factor. Args: array (np.ndarray): The numpy array to be scaled. factor (float): The scale factor. Returns: np.ndarray: The scaled numpy array. """ return array * factor ``` In the above example, the docstring provides a brief explanation of what the function does, its parameters (`Args`), and what it returns (`Returns`). 
The type hints in the function definition provide additional context about the expected types of the arguments and the return type. This combination makes it easier for anyone reading the code to understand the function's purpose without having to analyze its implementation. Now that we have introduced what docstrings are and seen an example of their use in a function relevant to data science, let's move on to understand their importance in machine learning projects. --DIVIDER--# Why Docstrings Matter in Machine Learning Projects Machine Learning projects, by nature, are often complex and multifaceted. They involve intricate algorithms, sophisticated models, and layers of data preprocessing steps. This complexity is exacerbated when multiple team members are involved, each bringing their unique approach to the codebase. In this setting, code comprehension and knowledge transfer become crucial. This is where docstrings, and code documentation in general, play a vital role. Here's why docstrings matter: 1. **Improved Code Readability**: Docstrings provide a concise summary of what a piece of code or a function does. They guide the reader through the logic of the code without them having to dissect every line. 2. **Enhanced Team Efficiency**: Well-documented code is a blessing when working in teams. It allows others to understand and use your functions correctly, reducing the need for lengthy explanations. It also helps onboard new team members quicker, as they can navigate the codebase more easily. 3. **Easier Code Maintenance and Debugging**: Good docstrings make it much easier to revisit your code for maintenance, debugging, or updates. They serve as reminders of what you intended the function to do, making it easier to identify and fix issues. 4. **Useful for Auto-Generated Documentation**: Docstrings serve as the foundation for auto-generated documentation using tools like Sphinx or Doxygen. If you decide to create API documentation or a manual for your project, consistent and comprehensive docstrings can make this process smooth and efficient. 5. **Professionalism and Best Practices**: Taking the time to write good docstrings reflects on your commitment to code quality and best practices. It's a professional habit that distinguishes seasoned developers from novices. 6. **Contributions to Open Source Projects**: When contributing to open source projects, good docstrings are crucial. They ensure that your contributions can be understood and utilized by others in the community. Good documentation increases the chances of your contributions being accepted and valued by the community. We understand that writing docstrings can sometimes feel like a burden, especially when you're in the flow of coding. However, investing a little time in writing clear, concise docstrings can save you and your team much more time in the future. In the following sections, we will introduce you to three different docstring styles, helping you pick a style that best suits your needs and gets you into the habit of writing valuable docstrings.--DIVIDER--# Exploring Docstring Styles When it comes to writing docstrings in Python, there are several established styles that developers use. While the choice of style often comes down to personal preference or team conventions, certain styles offer specific advantages that may be more suited to your project's needs. 
In this tutorial, we will cover three of the most popular docstring styles in use today: Google, Numpy, and reStructuredText.--DIVIDER--**Example Overview for Docstring Demonstrations** Before we get into the three styles of docstrings, let's consider an example that we'll use to demonstrate each style. This example will be a simple module that contains a class and a function. Please note that we'll be using this example strictly for docstring demonstration and won't actually be showing the implementations for these functions or classes. Here are the details: **Module: `linear_models.py`** Our module, named `linear_models.py`, provides methods and classes related to simple linear regression, a foundational concept in data science and statistics. The module allows users to perform basic linear regression tasks, including fitting a model to data and evaluating its performance. **Class: `SimpleLinearRegression`** Within the `linear_models.py` module, we have the `SimpleLinearRegression` class. This class allows users to perform simple linear regression. When given training data, the class computes the slope and intercept of the best-fit line using the least squares method. The primary methods of this class are: - `fit(x_train, y_train)`: Fits the training data and computes the slope and intercept. - `predict(x_test)`: Given test data, predicts the y-values based on the previously computed slope and intercept. **Function: `calculate_r_squared(y_true, y_pred)`** The `calculate_r_squared` function is a utility within our module. It takes in the true y-values of the data and the predicted y-values from a regression model. The function then computes the R-squared value, a metric that quantifies the proportion of variance in the dependent variable that's predictable from the independent variable(s). A higher R-squared value indicates a model that explains more of the variance, making it a useful evaluation metric for regression tasks. Let's now proceed to explore the three docstring styles in detail.--DIVIDER--## Google Style Docstrings Google style docstrings are arguably one of the most user-friendly and readable formats. They are clear, concise, and organized, which makes them a great choice for both small and large scale projects. To showcase the Google style, we'll provide examples of docstrings for our data science-centric module, class, and function, which focus on linear regression modeling. Let's begin with the module: **Module: `linear_models.py`** ```python """ This module provides methods and classes related to simple linear regression. It allows users to perform basic linear regression tasks, such as fitting a model to data and evaluating its performance. Example: >>> from linear_models import SimpleLinearRegression, calculate_r_squared >>> model = SimpleLinearRegression() >>> x_train, y_train = [1, 2, 3], [1, 2, 3.1] >>> model.fit(x_train, y_train) >>> y_pred = model.predict([4, 5]) >>> r_squared = calculate_r_squared(y_train, model.predict(x_train)) >>> print(r_squared) 0.999 # hypothetical output """ ``` **Class: `SimpleLinearRegression`** ```python class SimpleLinearRegression: """ Performs simple linear regression. This class computes the slope and intercept of the best-fit line using the least squares method. Attributes: slope (float): Slope of the regression line. intercept (float): Y-intercept of the regression line. Methods: fit(x_train, y_train): Fits the training data. predict(x_test): Predicts y-values for given x-values. 
Example: >>> model = SimpleLinearRegression() >>> x_train, y_train = [1, 2, 3], [1, 2, 3.1] >>> model.fit(x_train, y_train) >>> model.predict([4, 5]) [4.03, 5.03] # hypothetical output """ slope: float intercept: float def fit(self, x_train: List[float], y_train: List[float]) -> None: """ Fits the training data and computes the slope and intercept. Args: x_train (List[float]): Training data for independent variable. y_train (List[float]): Training data for dependent variable. Note: This method computes the coefficients using the least squares method. """ # Code for fitting... def predict(self, x_test: List[float]) -> List[float]: """ Predicts y-values based on the previously computed slope and intercept. Args: x_test (List[float]): Data for which predictions are to be made. Returns: List[float]: Predicted y-values. Raises: ValueError: If the model is not yet fitted (i.e., slope and intercept are not computed). """ # Code for predicting... ``` **Function: `calculate_r_squared(y_true, y_pred)`** ```python def calculate_r_squared(y_true: List[float], y_pred: List[float]) -> float: """ Computes the R-squared value. Args: y_true (List[float]): True y-values. y_pred (List[float]): Predicted y-values from the regression model. Returns: float: The R-squared value. Example: >>> y_true = [1, 2, 3] >>> y_pred = [0.9, 2.1, 2.9] >>> calculate_r_squared(y_true, y_pred) 0.989 # hypothetical output Note: R-squared quantifies the proportion of variance in the dependent variable that's predictable from the independent variables. """ # Code for calculating R-squared ... ``` In this Google style docstring: - The `Args` and `Returns` sections describe function or method arguments and return values. - The `Raises` section indicates exceptions that the function or method may raise under certain conditions. - We use an `Example` section in both the module and class docstrings to show simple usage. - The `Note` inline comment provides additional details or considerations about the function or method. This style allows for clean separation between sections, which can enhance readability.--DIVIDER--## Numpy Style Docstrings Numpy style docstrings have gained immense popularity within the Python scientific computing community, in large part due to the influence of the Numpy library itself. This style is particularly appealing for projects that involve mathematical operations or when mathematical notation is frequent. It provides clear demarcation between sections with underlines, making it visually distinct and easy to navigate. For a clearer understanding, let's look at our previously discussed module, class, and function, this time documented in the Numpy style: **Module: `linear_models.py`** ```python """ linear_models ------------- This module provides methods and classes related to simple linear regression. It allows users to perform basic linear regression tasks, such as fitting a model to data and evaluating its performance. Examples -------- >>> from linear_models import SimpleLinearRegression, calculate_r_squared >>> model = SimpleLinearRegression() >>> x_train, y_train = [1, 2, 3], [1, 2, 3.1] >>> model.fit(x_train, y_train) >>> y_pred = model.predict([4, 5]) >>> r_squared = calculate_r_squared(y_train, model.predict(x_train)) >>> print(r_squared) 0.999 # hypothetical output """ ``` **Class: `SimpleLinearRegression`** ```python class SimpleLinearRegression: """ Performs simple linear regression. This class computes the slope and intercept of the best-fit line using the least squares method. 
Attributes ---------- slope : float Slope of the regression line. intercept : float Y-intercept of the regression line. Methods ------- fit(x_train, y_train) Fits the training data. predict(x_test) Predicts y-values for given x-values. Examples -------- >>> model = SimpleLinearRegression() >>> x_train, y_train = [1, 2, 3], [1, 2, 3.1] >>> model.fit(x_train, y_train) >>> model.predict([4, 5]) [4.03, 5.03] # hypothetical output """ slope: float intercept: float def fit(self, x_train: List[float], y_train: List[float]) -> None: """ Fits the training data and computes the slope and intercept. Parameters ---------- x_train : list of float Training data for independent variable. y_train : list of float Training data for dependent variable. Notes ----- This method computes the coefficients using the least squares method. """ # Code for fitting... def predict(self, x_test: List[float]) -> List[float]: """ Predicts y-values based on the previously computed slope and intercept. Parameters ---------- x_test : list of float Data for which predictions are to be made. Returns ------- list of float Predicted y-values. Raises ------ ValueError If the model is not yet fitted (i.e., slope and intercept are not computed). """ # Code for predicting... ``` **Function: `calculate_r_squared`** ```python def calculate_r_squared(y_true: List[float], y_pred: List[float]) -> float: """ Computes the R-squared value. R-squared quantifies the proportion of variance in the dependent variable that's predictable from the independent variables. Parameters ---------- y_true : List[float] True y-values. y_pred : List[float] Predicted y-values from the regression model. Returns ------- float The R-squared value. Examples -------- >>> y_true = [1, 2, 3] >>> y_pred = [0.9, 2.1, 2.9] >>> calculate_r_squared(y_true, y_pred) 0.989 # hypothetical output """ # Code for calculating R-squared ... ``` With Numpy style docstrings, each section (e.g., Parameters, Returns, Raises, and Examples) is distinctly separated, making it easy to locate and understand specific details. Parameters and Returns sections are verbose, ensuring clarity, and the style's ability to include notes, warnings, and usage examples further enriches the documentation.--DIVIDER--## reStructuredText Style Docstrings reStructuredText (reST) style docstrings provide a formalized way to write documentation. This format is especially powerful due to its ability to support rich text markup, allowing for easy generation of HTML or PDF documentation using tools like Sphinx. **Module: `linear_models.py`** ```python """ This module provides methods and classes related to simple linear regression. It allows users to perform basic linear regression tasks, such as fitting a model to data and evaluating its performance. .. example:: >>> from linear_models import SimpleLinearRegression, calculate_r_squared >>> model = SimpleLinearRegression() >>> x_train, y_train = [1, 2, 3], [1, 2, 3.1] >>> model.fit(x_train, y_train) >>> y_pred = model.predict([4, 5]) >>> r_squared = calculate_r_squared(y_train, model.predict(x_train)) >>> print(r_squared) 0.999 # hypothetical output """ ``` **Class: `SimpleLinearRegression`** ```python class SimpleLinearRegression: """ Performs simple linear regression. This class computes the slope and intercept of the best-fit line using the least squares method. :ivar slope: Slope of the regression line. :ivar intercept: Y-intercept of the regression line. :methods: fit(x_train, y_train), predict(x_test) .. 
example:: >>> model = SimpleLinearRegression() >>> x_train, y_train = [1, 2, 3], [1, 2, 3.1] >>> model.fit(x_train, y_train) >>> model.predict([4, 5]) [4.03, 5.03] # hypothetical output """ slope: float intercept: float def fit(self, x_train: List[float], y_train: List[float]) -> None: """ Fits the training data and computes the slope and intercept. :param x_train: Training data for independent variable. :type x_train: List[float] :param y_train: Training data for dependent variable. :type y_train: List[float] .. note:: This method computes the coefficients using the least squares method. """ # Code for fitting... def predict(self, x_test: List[float]) -> List[float]: """ Predicts y-values based on the previously computed slope and intercept. :param x_test: Data for which predictions are to be made. :type x_test: List[float] :return: Predicted y-values. :rtype: List[float] :raises ValueError: If the model is not yet fitted (i.e., slope and intercept are not computed). """ # Code for predicting... ``` **Function: `calculate_r_squared`** ```python def calculate_r_squared(y_true: List[float], y_pred: List[float]) -> float: """ Computes the R-squared value. :param y_true: True y-values. :type y_true: List[float] :param y_pred: Predicted y-values from the regression model. :type y_pred: List[float] :return: The R-squared value. :rtype: float .. example:: >>> y_true = [1, 2, 3] >>> y_pred = [0.9, 2.1, 2.9] >>> calculate_r_squared(y_true, y_pred) 0.989 # hypothetical output .. note:: R-squared quantifies the proportion of variance in the dependent variable that's predictable from the independent variables. """ # Code for calculating R-squared ... ``` As you can observe, reStructuredText uses colons (`:`) for argument and return type specifications. The `.. note::`, `.. example::`, and other directives add richness to the docstrings, making them more comprehensive and user-friendly.--DIVIDER--:::info{title="Info"} **Integrating reStructuredText with Sphinx** While reStructuredText is a markup language in its own right, its relevance to Python developers is often closely tied to the [Sphinx](https://www.sphinx-doc.org/en/master/) documentation generator. Sphinx utilizes reStructuredText to produce rich, navigable documentation for software projects. By following a consistent style in your docstrings and combining it with Sphinx, you can easily generate professional-quality documentation for your projects. If you're considering producing detailed documentation for larger projects, integrating reStructuredText with Sphinx is highly recommended. :::--DIVIDER--# Choosing the Right Docstring Style for Your Project When it comes to docstring styles, there isn't a one-size-fits-all solution. The best style for your project depends on several factors, including the complexity of your project, your team's preferences, and the tools you're using. **Google Style**: If your team prefers a style that is simple to write and easy to read, the Google style might be the best choice. It is concise, human-readable, and doesn't require you to learn a new markup language. This style is a great choice for smaller projects or projects where the primary audience is the code's users rather than developers. **NumPy Style**: If your project involves complex data types or mathematical operations, the NumPy style might be more appropriate. This style excels in projects that require precise, detailed explanations for parameters and return types—something often necessary in data science and machine learning projects. 
NumPy-style docstrings can be a bit verbose, but they can significantly improve the clarity of your code. **reStructuredText Style**: If your project involves generating documentation using Sphinx, the reStructuredText style is the best choice. It supports a variety of additional directives, making it the most flexible option for creating rich, structured documentation. Remember, the main purpose of docstrings is to provide clear, understandable explanations for your code's functionality. The best docstring style for you is the one that helps you achieve this goal most effectively. While it's good practice to maintain consistency in your project, don't hesitate to switch styles if a different one better suits your needs. Regardless of the style you choose, the use of docstrings will undoubtedly make your code more understandable, maintainable, and reusable, thereby increasing the overall quality of your machine learning project.--DIVIDER--# Best Practices for Writing and Maintaining Docstrings Maintaining high-quality docstrings is an ongoing process. Here are some best practices that can help ensure your docstrings are as helpful as possible: 1. **Write Comprehensive Docstrings**: A docstring should describe what a function does, its input parameters, its return values, and any exceptions it raises. If applicable, it should also include a brief example of usage. A well-written docstring allows others (and future you!) to understand your code without having to read and understand all of its source code. 2. **Keep Your Docstrings Up to Date**: As your code changes, make sure your docstrings are updated to reflect those changes. Outdated or incorrect documentation can be even more confusing than no documentation at all. 3. **Be Concise but Clear**: While docstrings should be detailed, they shouldn't be excessively verbose. Aim to make your docstrings as concise as possible without sacrificing clarity. 4. **Use Third Person Point of View**: Write your docstrings as if you're describing the function to another person. For example, instead of "We calculate the mean", write "This function calculates the mean". 5. **Maintain Consistency**: Within a project, try to maintain a consistent style of docstrings. This makes it easier for others to understand your codebase. 6. **Avoid Mentioning Redundant Details**: If a detail is obvious from the source code, there's no need to include it in the docstring. For instance, if a function named `add_numbers` takes two arguments `num1` and `num2`, you don't need to mention in the docstring that the function adds numbers—it's self-explanatory. 7. **Use Type Hints**: Type hints complement docstrings by providing explicit indications of a function's input and output types. This can make your code even more understandable. Incorporating these practices will enhance the effectiveness of your docstrings, making your code much easier to understand and maintain—crucial aspects in machine learning projects, especially when they grow in size or when you're collaborating with others.--DIVIDER--# Summary This tutorial offers a deep dive into three primary docstring styles prevalent in Python: Google, Numpy, and reStructuredText. Tailored for data scientists and machine learning engineers, the guide highlights the importance of thorough documentation, especially in complex data-driven projects. 
With clear examples, including type hints and in-doc examples, practitioners are equipped to write clear, concise, and informative docstrings, ensuring that ML models and data processing functions are understandable and maintainable by teams and future contributors.--DIVIDER--# References 1. [PEP 257 - Docstring Conventions](https://www.python.org/dev/peps/pep-0257/) - The official Python Enhancement Proposal that outlines conventions for writing docstrings in Python. 2. [Numpydoc](https://numpydoc.readthedocs.io/en/latest/format.html) - A detailed guide on the Numpydoc style of docstrings, primarily used in scientific computing. 3. [Google Python Style Guide](https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings) - Google's comprehensive style guide for Python, which includes a section on docstrings. 4. [reStructuredText Primer](https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html) - An introduction to reStructuredText, used commonly in Python documentation. 5. [Sphinx Documentation](https://www.sphinx-doc.org/en/master/index.html) - The official guide and documentation for Sphinx, a powerful documentation generator that works well with reStructuredText and Python docstrings. 6. [PEP 484 - Type Hints](https://www.python.org/dev/peps/pep-0484/) - The official Python Enhancement Proposal introducing type hints to the language.
EeNv3K1byb1V20OLZbBOd
ready-tensor
cc-by-sa
Ready Tensor Forecasting Benchmark
![publication-narrow.webp](publication-narrow.webp)--DIVIDER--# Ready Tensor Forecasting Benchmark ## Abstract The purpose of this project is to provide a comprehensive and systematic evaluation of forecasting models across diverse time series datasets. This project aims to help researchers and practitioners identify effective forecasting models tailored to different data characteristics, while highlighting the strengths of various model categories, from tabular and neural network models to advanced foundational models. By benchmarking models on 24 real-world datasets with varying time frequencies and covariates, we examine model performance in realistic forecasting scenarios. Our findings reveal that tabular models, specifically extra trees and random forest, and neural network models such as PatchMixer, Variational Encoder and NBeats consistently exhibit superior performance. Among the foundational forecasting models, the Chronos models, leveraging large-scale, pretrained techniques, demonstrate exceptional zero-shot learning capabilities, achieving high performance even on datasets not included in their training corpus. This underscores the significant potential of foundational models in enhancing forecasting accuracy and generalization across various domains. Despite the advanced capabilities of these models, naive benchmarks remain indispensable for evaluating model complexity against forecasting efficacy, particularly in scenarios lacking clear seasonal patterns, such as yearly-frequency datasets. This benchmark project highlights the evolving landscape of time series forecasting, where the integration of large-scale, pretrained models like Chronos is poised to redefine industry standards for accuracy and applicability. ## Introduction The purpose of the "Ready Tensor Forecasting Benchmark" project is to establish a comprehensive, evolving benchmark that enables clear comparisons of forecasting models across a wide range of real-world scenarios. By comparing a growing collection of model types, including naive (baseline), statistical, machine learning, neural network, and hybrid approaches, this project aims to help researchers and practitioners identify the most effective models for specific forecasting tasks, with an emphasis on accuracy, adaptability, and efficiency. This project focuses on univariate forecasting, predicting a single response variable while accommodating exogenous features (covariates) to improve accuracy. Using 24 diverse datasets, including synthetic ones, with time frequencies ranging from hourly to yearly, we explore a broad spectrum of scenarios, distinguishing datasets by temporal characteristics and covariate types, from static and historical to future-oriented variables. This variety provides a realistic setting to examine model performance under different conditions, enabling users to choose models that best meet their forecasting needs. Our evaluation relies on metrics like RMSE, MAE, RMSSE, and MASE, where RMSSE and MASE are especially valuable for comparing performance relative to simple naive forecasts. This evolving benchmark continually incorporates new models, staying current with advances in forecasting technology and ensuring practical relevance for users looking to optimize their forecasting strategies. ## Models and Categories In this project, forecasting models are systematically selected from six distinct categories based on their underlying methodologies and typical use cases.
This categorization facilitates a clearer comparison of model performances across different types of time series data. Below is an overview of each category along with examples to illustrate the diversity of models considered: ### 1. Naive Models Naive models establish the baseline for forecasting performance, utilizing straightforward prediction strategies based on historical data trends. **Examples:** Naive Mean, Naive Drift, Naive Seasonal. ### 2. Statistical Models These models employ traditional statistical methods to analyze and forecast time series data, capturing explicit components such as trend and seasonality. **Examples:** ARIMA (AutoARIMA), Theta, BATS. ### 3. Hybrid (Statistical + Machine Learning) Models This category includes models that combine elements of both statistical and machine learning approaches to leverage the strengths of each in forecasting applications. These hybrids aim to improve forecast accuracy and reliability by integrating statistical models' interpretability with machine learning models' adaptability. **Examples:** Prophet (combines decomposable time series models with machine learning techniques), D-Linear Forecaster in GluonTS (merges linear statistical forecasting with machine learning enhancements). ### 4. Machine Learning Models Machine Learning models apply various algorithmic approaches learned from data to predict future values, including both regression and classification techniques tailored for forecasting. **Examples:** Random Forest, Gradient Boosting Machines (GBM), Support Vector Machines (SVM), Elastic Net Regression. ### 5. Neural-Network Models Utilizing deep learning architectures, Neural-Network models are adept at modeling complex and non-linear relationships within large datasets. **Examples:** NBeats, RNN (LSTM), Convolutional Neural Networks (CNN), PatchTST, TSMixer, Transformer models. ### 6. Foundational Models for Time Series Forecasting Foundational models utilize large-scale, pretrained techniques to forecast across diverse domains. They are trained on tokenized time series data and apply transformer-based learning. These models, like the Chronos series from Amazon and Moirai from Salesforce, are remarkable due to their zero-shot prediction and robust generalization capabilities. **Examples:** Chronos from Amazon and Moirai from SalesForce. ## Model Implementations Our approach to implementing forecasting models was designed to ensure comparability and objectivity across the benchmarking process. Key aspects of our model implementations include: #### Generic Implementations Models were implemented generically without special alterations for specific datasets or engaging in dataset-specific feature engineering. #### Open-Source Libraries Where feasible, we utilized established open-source libraries such as Darts, GluonTS, Skforecast, and Nixtla. These libraries provided robust preprocessing and model implementations. For specific comparisons, we also developed a number of custom models to supplement the analysis alongside these library-based implementations. #### Preprocessing Variability Performance differences may partly arise from the diverse preprocessing features of these libraries. #### Hyper-parameter Tuning For each model, we aimed to identify hyper-parameters that were effective on a global level, across all datasets, without pursuing dataset-specific tuning. Dataset specific hyper-parameter tuning for each model would be cost-prohibitive considering the large number of datasets and models involved in this benchmark. 
This approach may have inherently favored simpler models with fewer hyper-parameters to adjust. #### Chronos and Moirai In the case of foundational models such as Chronos and Moirai, the training function effectively acts as a no-op (no operation). These models are zero-shot learners, pre-trained on a vast array of time series data, and thus require no additional training when applied to new datasets within our benchmark. ## Dataset Characteristics: Frequencies and Covariates In our project, datasets are not only categorized by their temporal frequencies but also distinguished by the presence and types of covariates they include. This classification acknowledges the complexity of real-world forecasting tasks, where additional information (exogenous variables) can significantly influence model performance. The list of datasets is as follows: | Dataset | Dataset Industry | Time Granularity | Series Length | # of Series | # Past Covariates | # Future Covariates | # Static Covariates | | --------------------------------------------------- | :-------------------------: | :--------------: | :-----------: | :---------: | :---------------: | :-----------------: | :-----------------: | | Air Quality KDD 2018 | Environmental Science | hourly | 10,898 | 34 | 5 | 0 | 0 | | Airline Passengers | Transportation / Aviation | monthly | 144 | 1 | 0 | 0 | 0 | | ARIMA Process | None (Synthetic) | other | 750 | 25 | 0 | 0 | 0 | | Atmospheric CO2 Concentrations | Environmental Science | monthly | 789 | 1 | 0 | 0 | 0 | | Australian Beer Production | Food & Beverage / Brewing | quarterly | 218 | 1 | 0 | 0 | 0 | | Avocado Sales | Agriculture and Food | weekly | 169 | 106 | 7 | 0 | 1 | | Bank Branch Transactions | Finance / Synthetic | weekly | 169 | 32 | 5 | 1 | 2 | | Climate Related Disasters Frequency | Climate Science | yearly | 43 | 50 | 6 | 0 | 0 | | Daily Stock Prices | Finance | daily | 1,000 | 52 | 5 | 0 | 0 | | Daily Weather in 26 World Cities | Meteorology | daily | 1,095 | 25 | 16 | 0 | 1 | | GDP per Capita Change | Economics and Finance | yearly | 58 | 89 | 0 | 0 | 0 | | Geometric Brownian Motion | None (Synthetic) | other | 504 | 100 | 0 | 0 | 0 | | M4 Forecasting Competition Sampled Daily Series | Miscellaneous | daily | 1,280 | 60 | 0 | 0 | 0 | | M4 Forecasting Competition Sampled Hourly Series | Miscellaneous | hourly | 748 | 35 | 0 | 0 | 0 | | M4 Forecasting Competition Sampled Monthly Series | Miscellaneous | monthly | 324 | 80 | 0 | 0 | 0 | | M4 Forecasting Competition Sampled Quarterly Series | Miscellaneous | quarterly | 78 | 75 | 0 | 0 | 0 | | M4 Forecasting Competition Sampled Yearly Series | Miscellaneous | yearly | 46 | 100 | 0 | 0 | 0 | | Online Retail Sales | E-commerce / Retail | daily | 374 | 38 | 1 | 0 | 0 | | PJM Hourly Energy Consumption | Energy | hourly | 10,223 | 10 | 0 | 0 | 0 | | Random Walk Dataset | None (Synthetic) | other | 500 | 70 | 0 | 0 | 0 | | Seattle Burke Gilman Trail | Urban Planning | hourly | 5,088 | 4 | 0 | 0 | 4 | | Sunspots | Astronomy / Astrophysics | monthly | 2,280 | 1 | 0 | 0 | 0 | | Multi-Seasonality Timeseries With Covariates | None (Synthetic) | other | 160 | 36 | 1 | 2 | 3 | | Theme Park Attendance | Entertainment / Theme Parks | daily | 1,142 | 1 | 0 | 56 | 0 | More information regarding each of the 24 datasets can be found in this public repository: https://github.com/readytensor/rt-datasets-forecasting. 
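To make the covariate taxonomy above concrete, here is a minimal, hypothetical sketch of how a single weekly series with past, future, and static covariates might be laid out before modeling. The column names and values are illustrative assumptions, not the actual schema used in the datasets repository.

```python
import pandas as pd

# Hypothetical layout for one series from a dataset such as "Bank Branch Transactions".
# Column names and values are illustrative, not the real schema in the repository.
frame = pd.DataFrame(
    {
        "series_id": ["branch_001"] * 5,
        "date": pd.date_range("2023-01-02", periods=5, freq="W"),
        "target": [120.0, 135.0, 128.0, 150.0, 142.0],  # value to forecast
        "foot_traffic": [480, 510, 495, 560, 530],      # past covariate: known only up to the forecast origin
        "is_holiday_week": [0, 0, 1, 0, 0],             # future covariate: known ahead of time
        "region": ["north"] * 5,                        # static covariate: constant per series
    }
)

# Models that support exogenous features receive the covariate columns alongside
# the target history; univariate-only models use the "target" column alone.
print(frame)
```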
## Evaluation Method - **Train/Test Split**: We use a simple train/test split along the temporal dimension, ensuring models are trained on historical data and assessed on unseen future data. This approach, chosen for its computational efficiency and the breadth of datasets, avoids cross-validation to reduce computational load. With a benchmark involving 24 datasets, the risk of over-fitting is lowered. ## Metrics - **RMSE (Root Mean Squared Error):** Measures the square root of the average squared differences between forecasted and actual values. - **RMSSE (Root Mean Squared Scaled Error):** A scaled version of RMSE, dividing a model's RMSE by the RMSE from the Naive Mean Forecast Model, which predicts using the historical mean. - **MAE (Mean Absolute Error):** Calculates the average magnitude of the errors between forecasted and actual values. - **MASE (Mean Absolute Scaled Error):** Scales MAE by dividing the model's MAE by the MAE from the Naive Mean Forecast Model. - **sMAPE (Symmetric Mean Absolute Percentage Error):** A symmetric measure that calculates the percentage error between forecasted and actual values. - **WAPE (Weighted Absolute Percentage Error):** Measures the accuracy of a model by calculating the percentage error weighted by actual values. - **R-squared:** Indicates the proportion of the variance in the dependent variable that is predictable from the independent variable(s). RMSSE and MASE are particularly emphasized for their ability to provide context-relative performance assessments, scaling errors against those of a simple benchmark (the Naive Recent Window Mean Forecast Model) to ensure comparability across different scales and series characteristics. **Note**: Training and inference times for all models on all datasets have been collected and are being analyzed. Detailed results will be available on this page soon, providing insights into computational efficiency alongside accuracy metrics. ## Key Results The benchmarking results are summarized in the following heatmap based on the RMSSE metric. Lower RMSSE scores indicate better forecasting performance. ![Forecasting Models Performance Heatmap](https://github.com/readytensor/rt_forecasting_benchmark/blob/main/outputs/forecasting_models_heatmap.png?raw=true) The heatmap visualizes the benchmarking results for 50 selected models out of a total pool of 92 (as of April 30, 2024). Models were selectively included based on performance, uniqueness, and fairness criteria. Specifically, models that performed significantly worse than others, such as the Fast Fourier Transform, were excluded. To avoid redundancy, only the best implementation of models appearing multiple times across different libraries (e.g., XGBoost in Scikit-Learn, Skforecast, MLForecast) is featured. The results can be summarized as follows: - **Machine-learning Models:** Extra trees and random forest models demonstrate the best overall performance. - **Neural Networks:** PatchMixer, Variational Encoder, CNN, NBeats, PatchTST, and MLP emerged as top neural network models, with Variational Encoder showing the best results, notably including pretraining on synthetic data. - **Simpler Models:** DLinear and Ridge regression models show strong performance, highlighting efficiency in specific contexts. - **Statistical Model:** TBATS stands out among statistical models for its forecasting accuracy. - **Foundational Models:** The Chronos-T5-Large model, within the Chronos family, ranks among the top performers. 
This performance showcases the model's exceptional zero-shot learning capabilities, highlighting its robust generalization and forecasting accuracy across unseen datasets. The Moirai Large and Base models perform well, although not as competitively as the Chronos models. - **Yearly Datasets:** On yearly datasets, none of the advanced models surpassed the performance of the naive mean model, highlighting the difficulty of forecasting with datasets that lack the conspicuous seasonal patterns commonly found in higher-frequency datasets. **Note on Pretraining:** The NBeats model improved in performance upon pretraining on synthetic data. This highlights pretraining on synthetic data or other real-world datasets as a promising avenue for enhancing neural network models' forecasting capabilities. This approach warrants further exploration to potentially boost the performance of other neural network architectures in this benchmark. **Note on Chronos Model Performance:** While the Chronos models exhibit impressive zero-shot capabilities, it's important to acknowledge potential train/test leakage. The Chronos training corpus includes a large collection of publicly available datasets, such as samples from the M4 competition and synthetic datasets. Given our benchmark includes similar datasets, there's a possibility some of our benchmark datasets were part of Chronos's training set. However, with 24 datasets in total, the majority of our benchmark datasets likely remain distinct from the Chronos training corpus, preserving the integrity of our evaluation. ## Project Summary Tabular models like extra trees and random forest are the top performers in our study, closely followed by neural network models such as PatchMixer, Variational Encoder, CNN and NBeats. The Chronos family of foundational models also ranks near the top of the scoreboard. The Chronos models are zero-shot learners, meaning they can perform well on datasets that were not part of their training corpus. Their highly competitive performance underscores the potential of large-scale, pre-trained models in forecasting. Naive models continue to play a crucial role as benchmarks, reminding us that complexity does not always equate to superior performance, particularly on datasets with yearly frequencies.
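To make the role of the naive benchmark concrete, below is a minimal sketch of how scaled metrics such as RMSSE and MASE can be computed against a naive mean forecast, following the metric descriptions given earlier. It is a simplified illustration, not the benchmark's actual evaluation code.

```python
import numpy as np

def scaled_errors(y_train, y_test, y_pred):
    """RMSSE and MASE as described above: the model's RMSE / MAE on the test
    window divided by the RMSE / MAE of a naive forecast that repeats the
    historical mean. Simplified illustration only."""
    y_train, y_test, y_pred = (np.asarray(a, dtype=float) for a in (y_train, y_test, y_pred))
    naive = np.full_like(y_test, y_train.mean())  # naive mean forecast

    model_rmse = np.sqrt(np.mean((y_test - y_pred) ** 2))
    naive_rmse = np.sqrt(np.mean((y_test - naive) ** 2))
    model_mae = np.mean(np.abs(y_test - y_pred))
    naive_mae = np.mean(np.abs(y_test - naive))

    return {"RMSSE": model_rmse / naive_rmse, "MASE": model_mae / naive_mae}

# Scores below 1.0 indicate the model outperformed the naive mean forecast.
history = [100, 102, 98, 105, 110, 108]
actuals = [112, 115, 111]
forecast = [110, 113, 112]
print(scaled_errors(history, actuals, forecast))
```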
fUTy90FWorvg
3rdson
none
Accelerate Your AI/ML Career with Open-Source Contributions
Whether you are a beginner eager to build your portfolio or a seasoned pro looking to collaborate on meaningful projects, open-source AI/ML offers endless opportunities to learn, grow, and make an impact. In this article, we will introduce you to curated open-source projects; from emerging frameworks like Swarmauri to industry staples like PyTorch and Hugging Face. You will learn how to contribute effectively, avoid common pitfalls, and turn your code into a career-building asset. But first, let’s tackle the basics. ## What Are Open Source Projects? Open-source projects are projects whose source code is publicly available for anyone to view, use, modify, and contribute to. These projects are usually maintained by a community of developers who collaborate to improve the software/framework/library, fix bugs, add new features, and ensure its overall stability. In Al/ML, open-source projects play a huge role in innovation. Many of the tools and frameworks we use daily, like TensorFlow, PyTorch, LangChain, Scikit-Learn, etc., are open source, meaning anyone can contribute to their development. ## Why Should You Contribute to Open-Source Projects? ![Top-5-Reasons-to-Contribute-to-Open-Source-Project.png](Top-5-Reasons-to-Contribute-to-Open-Source-Project.png) Contributing to open-source projects is one of the most rewarding ways to grow as a developer, data scientist, or AI/ML professional. Whether you're just starting or have years of experience, getting involved in open source offers a unique set of benefits that can help you build your skills, expand your network, and elevate your career. Here’s why you should consider contributing: 1. **Build Real-World Experience** Open-source projects provide a platform to work on real-world problems and cutting-edge technologies. Unlike personal projects or coursework, contributing to open source exposes you to production-level code, collaborative workflows, and industry-standard tools. This hands-on experience is invaluable and can set you apart in the job market. 2. **Sharpen Your Technical Skills** Whether it’s debugging, writing documentation, or optimizing algorithms, open-source contributions allow you to hone your technical skills in a practical setting. You will also get to work with tools and frameworks that are widely used in the industry, such as TensorFlow, PyTorch, or Hugging Face, giving you a competitive edge. From my personal experience, you will learn Git and GitHub very well when you start contributing to open source. I learnt Git and GitHub the best when contributing to open-source projects 3. **Showcase Your Expertise** Your contributions to open-source projects are public and visible to everyone. This means you can showcase your work on platforms like GitHub, LinkedIn, or your portfolio. Potential employers often look at open-source contributions as proof of your skills, initiative, and ability to collaborate with others. 4. **Learn from the Best** Open-source projects are often maintained by some of the brightest minds in the field. By contributing, you get the opportunity to learn from experienced developers, receive feedback on your code, and understand best practices in software development and machine learning. 5. **Give Back to the Community** Many of the tools and frameworks we use daily, like Scikit-learn, Jupyter Notebooks, or LangChain, are all open source. Contributing to these projects is a way to give back to the community that has built the tools you rely on. 
It’s a chance to support innovation and make these resources better for everyone. 6. **Expand Your Network** Open-source communities are global and diverse. By contributing, you will connect with like-minded individuals, collaborate with professionals from different backgrounds, and build relationships that can lead to mentorship, job opportunities, or even lifelong friendships. 7. **Boost Your Confidence** Seeing your code merged into a popular project or receiving positive feedback from maintainers can be incredibly motivating. It’s a tangible way to measure your progress and gain confidence in your abilities as a developer or data scientist. 8. **Stay Ahead of the Curve** Open-source projects are often at the forefront of innovation. By contributing, you will stay updated on the latest trends, tools, and techniques in AI/ML. This knowledge can help you stay relevant in a fast-evolving field. 9. **It’s Beginner-Friendly** You don’t need to be an expert to contribute to open source. Many projects have beginner-friendly issues labelled as `good-first-issue` or `help-wanted`. These are great starting points for newcomers to get their feet wet and gradually build their confidence. 10. **Make an Impact** Your contributions, no matter how small, can have a significant impact. Whether it’s fixing a bug, improving documentation, or adding a new feature, your work can help thousands of users and developers around the world. :::info{title="NOTE"} If you already have an open-source project in mind and just want to learn how to contribute effectively, skip to the end of the article. Otherwise, stick around as we’ve got plenty of great options to explore! ::: --- In this article, we will be categorizing these open-source projects based on their focus areas, making it easier for you to find the ones that match your interests and skill level. We will cover a mix of well-established, stable projects and emerging projects that are worth keeping an eye on. This way, whether you're looking for something reliable to contribute to or want to get involved in the next big thing, you will have plenty of options to choose from. Let's jump right into it. ![1729504471981.png](1729504471981.png) --- ## Core Frameworks & Libraries 1. **PyTorch** PyTorch is a flexible deep learning framework developed by Meta AI in **2016**, renowned for its dynamic computation graph and Python-first design. It dominates research workflows for tasks like computer vision, NLP, and reinforcement learning. 🔗 [GitHub](https://github.com/pytorch/pytorch) | [Docs](https://pytorch.org/docs/stable/index.html) 2. **TensorFlow** TensorFlow is Google’s flagship machine learning framework, released in **2015** and optimized for production-grade deployments. TensorFlow powers industrial-scale AI applications. 🔗 [GitHub](https://github.com/tensorflow/tensorflow) | [Docs](https://www.tensorflow.org/) 3. **Scikit-Learn** This is the go-to Python library for classical/traditional machine learning (_e.g., regression, clustering, SVMs_). Released in **2007**, it offers simple APIs for data preprocessing, model training, and evaluation. 🔗 [GitHub](https://github.com/scikit-learn/scikit-learn) | [Docs](https://scikit-learn.org/stable/) 4. **JAX** JAX is a high-performance numerical computing library from Google (**2018**), combining NumPy-like syntax with automatic differentiation and GPU/TPU acceleration. Key for cutting-edge research in physics, optimization, and ML.
🔗 [GitHub](https://github.com/google/jax) | [Docs](https://jax.readthedocs.io/) 5. **XGBoost** XGBoost is a scalable gradient-boosting library (**2014**) for structured/tabular data. It dominates Kaggle competitions and enterprise ML pipelines with its speed, accuracy, and support for distributed training. 🔗 [GitHub](https://github.com/dmlc/xgboost) | [Docs](https://xgboost.readthedocs.io/en/stable/) 6. **MLflow** A platform for managing the ML lifecycle (**2018**), including experiment tracking, model packaging, and deployment. Critical for MLOps and collaborative workflows. 🔗 [GitHub](https://github.com/mlflow/mlflow) | [Docs](https://mlflow.org/docs/latest/index.html) --- ## Generative AI / Agentic AI / NLP Projects 1. **LangChain** This is one of the first and most popular open-source frameworks designed to simplify the development of LLM-powered applications. Developed in **2022**, it provides tools for chaining LLM calls, integrating with external data sources, and building AI-driven applications like chatbots and autonomous agents. 🔗 [GitHub](https://github.com/langchain-ai/langchain) | [Docs](https://python.langchain.com/docs/introduction/) 2. **LangGraph** Built on top of LangChain, this framework is designed for creating stateful, multi-agent, and graph-based workflows with LLMs. Developed in **2023**, it excels in constructing complex AI applications that require dynamic task coordination. 🔗 [GitHub](https://github.com/langchain-ai/langgraph) | [Docs](https://langchain-ai.github.io/langgraph/) 3. **CrewAI** An open-source framework for building and managing multi-agent AI workflows, CrewAI enables developers to design teams of AI agents that collaborate efficiently. Developed in **2023**, it is ideal for applications that benefit from role-based, coordinated task execution. 🔗 [GitHub](https://github.com/joaomdmoura/crewai) | [Docs](https://docs.crewai.com/introduction) 4. **LlamaIndex** This data framework helps LLMs connect with external data sources by providing tools for indexing, retrieving, and querying information. Developed in **2022**, it streamlines the creation of knowledge-driven AI applications. 🔗 [GitHub](https://github.com/jerryjliu/llama_index) | [Docs](https://docs.llamaindex.ai/en/stable/) 5. **Swarmauri** Swarmauri is still in its early stages as an open-source tool for building, testing, and deploying AI-powered applications and agents. With a low adoption rate so far, it’s a great time to contribute and help shape its future. Swarmauri was first released in 2024. 🔗 [GitHub](https://github.com/swarmauri/swarmauri-sdk) | [Docs](https://docs.swarmauri.com/index.html) 6. **Pydantic AI** Developed by the creators of Pydantic, Pydantic AI is a Python-based framework designed to simplify the development of production-grade applications powered by Generative AI. Released in 2024, it provides robust tools for building, validating, and deploying AI agents, ensuring reliability and scalability in real-world applications. While still in its early stages, Pydantic AI is rapidly gaining traction for its focus on developer productivity and seamless integration with existing AI workflows. 🔗 [GitHub](https://github.com/pydantic/pydantic-ai) | [Docs](https://ai.pydantic.dev/) 7. **AgentGPT** An emerging framework that empowers developers to create fully autonomous agents capable of multi-step reasoning and task execution. Released in **2023**, AgentGPT is gaining attention for its simplicity and robust design in building interactive AI systems. 
🔗 [GitHub](https://github.com/reworkd/AgentGPT) | [Docs](https://docs.reworkd.ai/introduction) 8. **SmolAgents** A lightweight, modular agent framework from HuggingFace, SmolAgents is designed for rapid prototyping and deployment of specialized AI agents. Developed in **2024**, it offers an intuitive API and seamless integration with HuggingFace’s ecosystem. 🔗 [GitHub](https://github.com/huggingface/smolagents) | [Docs](https://huggingface.co/docs/smolagents) 9. **OpenAGI** An ambitious open-source platform aiming to push toward Artificial General Intelligence (AGI) by integrating LLMs with domain-specific expert models. Developed in **2023**, OpenAGI leverages reinforcement learning from task feedback to tackle complex, multi-step real-world tasks. 🔗 [GitHub](https://github.com/agiresearch/OpenAGI) 10. **Ollama** This is an open-source tool that allows you to download and run large language models locally on your computer. Developed to optimize both performance and data privacy, Ollama provides a simple, user-friendly interface for managing multiple LLMs on your hardware. By eliminating the need for cloud-based processing, it offers faster response times and greater control over model configurations, making it an excellent choice for developers and researchers looking to experiment with and deploy AI models locally. 🔗 [GitHub](https://github.com/ollama/ollama) 11. **HuggingFace Transformers** This is an open-source Python library that simplifies working with transformer-based models across a wide range of tasks, from natural language processing to audio and video processing. It provides seamless access to a vast collection of pre-trained models via the Hugging Face Model Hub, making it the go-to solution if you want to quickly implement state-of-the-art AI without building models from scratch. 🔗 [GitHub](https://github.com/huggingface/transformers) | [Docs](https://huggingface.co/docs/transformers) 12. **spaCy** spaCy is a fast, production-ready NLP library for Python, released in **2015**. It excels in tasks like tokenization, named entity recognition (NER), and dependency parsing, with pre-trained models for multiple languages. 🔗 [GitHub](https://github.com/explosion/spaCy) | [Docs](https://spacy.io/) 13. **NLTK (Natural Language Toolkit)** NLTK, released in **2001**, is a comprehensive NLP library designed for education and research. It offers tools for tokenization, stemming, lemmatization, and parsing, along with a vast collection of linguistic resources. 🔗 [GitHub](https://github.com/nltk/nltk) | [Docs](https://www.nltk.org/) --- ## Computer Vision / Image-Based Projects / Object Detection & Segmentation 1. **OpenCV (Open Source Computer Vision Library)** OpenCV is one of the most widely used libraries for computer vision tasks. It provides tools for image processing, object detection, facial recognition, and more. Developed in **2000**, it has become a cornerstone for both academic research and industrial applications. 🔗 [GitHub](https://github.com/opencv/opencv) | [Docs](https://docs.opencv.org/) 2. **YOLO (You Only Look Once)** YOLO is a state-of-the-art real-time object detection system. Known for its speed and accuracy, YOLO has gone through several iterations, with YOLO11 being the latest version as of today. It is widely used in applications like surveillance, autonomous vehicles, and robotics. 🔗 [GitHub](https://github.com/ultralytics/ultralytics) | [Docs](https://docs.ultralytics.com/) 3. 
**Stable Diffusion** Stable Diffusion is a generative AI model for creating high-quality images from text prompts. Released in **2022**, it has revolutionized the field of AI art and image generation. The model is open-source, allowing developers to fine-tune and deploy it for various creative and commercial applications. 🔗 [GitHub](https://github.com/CompVis/stable-diffusion) | [Docs](https://huggingface.co/docs/diffusers/index) 4. **Detectron2** Developed by Facebook AI Research (FAIR), Detectron2 is a powerful framework for object detection, segmentation, and other vision tasks. It is built on PyTorch and offers pre-trained models for quick deployment. Released in **2019**, it is widely used in research and industry. 🔗 [GitHub](https://github.com/facebookresearch/detectron2) | [Docs](https://detectron2.readthedocs.io/) 5. **MediaPipe** Developed by Google, MediaPipe is a framework for building multimodal (e.g., video, audio, and sensor data) applications. It includes pre-built solutions for face detection, hand tracking, pose estimation, and more. Released in **2019**, it is widely used for real-time vision applications. 🔗 [GitHub](https://github.com/google/mediapipe) | [Docs](https://ai.google.dev/edge/mediapipe/solutions/guide) 6. **MMDetection** MMDetection is an open-source object detection toolbox based on PyTorch. It supports a wide range of models, including Faster R-CNN, Mask R-CNN, and YOLO. Developed in **2018**, it is part of the OpenMMLab project and is widely used in academic and industrial research. 🔗 [GitHub](https://github.com/open-mmlab/mmdetection) | [Docs](https://mmdetection.readthedocs.io/) 7. **Segment Anything Model (SAM)** Developed by Meta AI, SAM is a groundbreaking model for image segmentation. Released in **2023**, it can segment any object in an image with minimal input, making it highly versatile for applications in medical imaging, autonomous driving, and more. 🔗 [GitHub](https://github.com/facebookresearch/segment-anything) | [Docs](https://segment-anything.com/) 8. **Fast.ai** Fast.ai is a deep learning library that simplifies training and deploying computer vision models. It includes pre-trained models and high-level APIs for tasks like image classification and object detection. Released in **2016**, it is widely used for educational purposes and rapid prototyping. 🔗 [GitHub](https://github.com/fastai/fastai) | [Docs](https://docs.fast.ai/) 9. **OpenPose** OpenPose is a real-time multi-person keypoint detection library. It can detect human poses, hands, and facial keypoints in images and videos. Released in **2017**, it is widely used in applications like fitness tracking and animation. 🔗 [GitHub](https://github.com/CMU-Perceptual-Computing-Lab/openpose) | [Docs](https://cmu-perceptual-computing-lab.github.io/openpose/web/html/doc/) --- ## MLOps & Deployment 1. **Kubeflow** Kubeflow is the go-to open-source platform for deploying machine learning workflows on Kubernetes. Launched in **2017**, it simplifies scaling ML pipelines, from data preprocessing to model serving in cloud-native environments. 🔗 [GitHub](https://github.com/kubeflow/kubeflow) | [Docs](https://www.kubeflow.org/docs/) 2. **BentoML** BentoML streamlines deploying ML models into production with a unified framework for packaging, serving, and monitoring. Released in **2019**, it supports all major frameworks (PyTorch, TensorFlow, etc.) and integrates seamlessly with Kubernetes, AWS Lambda, or your custom infrastructure.
🔗 [GitHub](https://github.com/bentoml/BentoML) | [Docs](https://docs.bentoml.org/) 3. **Seldon Core** Seldon Core is a production-grade platform for deploying ML models at scale. Launched in **2017**, it converts models into REST/gRPC microservices, handles A/B testing, and monitors performance. It is perfect for enterprises needing reliability and governance in their AI systems. 🔗 [GitHub](https://github.com/SeldonIO/seldon-core) | [Docs](https://docs.seldon.io/projects/seldon-core/en/latest/) 4. **Feast** Feast (Feature Store) is an open-source tool for managing and serving ML features in production. Released in **2019**, it bridges the gap between data engineering and ML teams, ensuring consistent feature pipelines for training and real-time inference. 🔗 [GitHub](https://github.com/feast-dev/feast) | [Docs](https://docs.feast.dev/) 5. **Cortex** Cortex automates deploying ML models as scalable APIs on AWS, GCP, or Azure. Launched in **2020**, it handles everything from autoscaling to monitoring, letting you focus on building models instead of infrastructure. 🔗 [GitHub](https://github.com/cortexlabs/cortex) | [Docs](https://docs.cortexlabs.com/) --- ## Data Manipulation & Visualization 1. **Pandas** Pandas is the go-to Python library for **data manipulation and analysis**. Released in **2008**, it simplifies tasks like cleaning, transforming, and analyzing structured data with its intuitive DataFrame API. 🔗 [GitHub](https://github.com/pandas-dev/pandas) | [Docs](https://pandas.pydata.org/docs/) 2. **NumPy** The backbone of numerical computing in Python, NumPy (**2006**) powers everything from data science to deep learning. Its `ndarray` object handles multi-dimensional arrays and matrices, making it essential for efficient ML workloads. 🔗 [GitHub](https://github.com/numpy/numpy) | [Docs](https://numpy.org/doc/) 3. **Matplotlib** The granddaddy of Python visualization (**2003**), Matplotlib turns raw data into publication-quality plots. Pair it with Pandas for quick exploratory data analysis (EDA). 🔗 [GitHub](https://github.com/matplotlib/matplotlib) | [Docs](https://matplotlib.org/) 4. **Seaborn** Seaborn (**2012**) supercharges Matplotlib with sleek statistical visualizations. Perfect for heatmaps, distribution plots, and correlation matrices. 🔗 [GitHub](https://github.com/mwaskom/seaborn) | [Docs](https://seaborn.pydata.org/) 5. **Plotly** Plotly (**2013**) creates **interactive, web-ready visualizations**. Build dashboards, 3D plots, or geographic maps, all with Python or JavaScript. 🔗 [GitHub](https://github.com/plotly/plotly.py) | [Docs](https://plotly.com/python/) --- ## Step-by-Step Guide to Contributing to Open-Source Projects ![PHOTO-2025-03-15-11-11-16.jpg](PHOTO-2025-03-15-11-11-16.jpg) Contributing to open-source projects might feel intimidating at first, especially when you’re staring at a massive codebase. But don’t worry. Once you break it down into small steps, it becomes surprisingly straightforward. Let’s walk through the process together. --- ### Step 1: Find the Project’s GitHub Page Start by navigating to the project’s GitHub repository. For example, if you want to contribute to **Hugging Face Transformers**, search for “Hugging Face Transformers GitHub” or use the direct link from their documentation. 🔍 _Pro tip_: Most projects link their GitHub repo in their official documentation or website footer. Look for a tiny 🐙 or "View on GitHub" button! 
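If you prefer the command line, GitHub's official `gh` CLI offers another quick way to locate and open a project's repository. This assumes `gh` is installed and authenticated; the repository names below are just examples.

```bash
# Search GitHub for candidate repositories by keyword
gh search repos "huggingface transformers" --limit 5

# Open a specific repository's page in your browser
gh repo view huggingface/transformers --web
```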
--- ### Step 2: Read the `CONTRIBUTING.md` File Every well-maintained project has a `CONTRIBUTING.md` file (sometimes named `CONTRIBUTORS.md` or `GUIDELINES.md`). This document is your cheat sheet because it explains _exactly_ how to contribute, including: - **Setup instructions** (e.g., how to install dependencies). - **Coding standards** (e.g., linting rules or testing requirements). - **Workflow guidelines** (e.g., how to submit a pull request). For example, here’s what you will see in **Swarmauri’s** `CONTRIBUTING.md`: ```markdown How to Contribute 1. Fork the Repository: - Navigate to the repository and fork it to your GitHub account. 2. Star and Watch: - Star the repo and watch for updates to stay informed. 3. Clone Your Fork: - Clone your fork to your local machine: git clone https://github.com/your-username/swarmauri-sdk.git 4. Create a New Branch: - Create a feature branch to work on: git checkout -b feature/your-feature-name 5. Make Changes: - Implement your changes. Write meaningful and clear commit messages. - Stage and commit your changes: git add . git commit -m "Add a meaningful commit message" 6. Push to Your Fork: - Push your branch to your fork: git push origin feature/your-feature-name 7. Write Tests: - Ensure each new feature has an associated test file. - Tests should cover: a. Component Type: Verify the component is of the expected type. b. Resource Handling: Validate inputs/outputs and dependencies. c. Serialization: Ensure data is properly serialized and deserialized. d. Access Method: Test component accessibility within the system. e. Functionality: Confirm the feature meets the project requirements. 8. Create a Pull Request: - Once your changes are ready, create a pull request (PR) to merge your branch into the main repository. - Provide a detailed description, link to related issues, and request a review. ``` :::warning _Don’t skip this step!_ Maintainers _love_ contributors who follow their guidelines. Ignoring the rules could lead to your Pull Request (PR) being rejected, even if your code is perfect. ::: --- :::tip{title="Pro Tip"} **What If There’s No `CONTRIBUTING.md`?** Don’t panic. Many projects are still evolving, and documentation might lag. Here’s how to navigate this: 1. **Check the `Issues` Tab**: Look for labels like `good-first-issue` or `help-wanted` as these are golden tickets for newcomers. For instance, **AgentGPT** doesn’t have a `CONTRIBUTING.md` yet, but its GitHub Issues are filled with tagged tasks, perfect for newcomers. 2. **Join the Conversation**: Head to the project’s **Discussions**, **Slack**, or **Discord** (linked in the repo’s “About” section). A quick “How can I help?” post often gets maintainers excited to guide you. 3. **Learn from Others**: Browse **recent pull requests** to see how contributors structured their code, wrote commit messages, or addressed feedback. Mimic their workflow to avoid rookie mistakes. If all of the above fails, open an issue asking, “How can I contribute?” Most maintainers will gladly point you to the right direction. 👍 ::: --- ### Step 3: Fork, Setup, and Submit Your First PR Time to roll up your sleeves. Let’s turn theory into action with a step-by-step walkthrough. --- **1. Fork the Repository** Forking creates your copy of the project on GitHub, allowing you to experiment without affecting the original codebase. **How to do it**: - Click the **“Fork”** button at the top-right of the project’s GitHub page. ![PHOTO-2025-03-18-13-39-09.jpg](PHOTO-2025-03-18-13-39-09.jpg) --- **2. 
Clone Your Fork Locally** Clone the forked repo to your machine to start coding: Open your terminal/command line and write the command below ```bash git clone https://github.com/your-username/project-name.git cd project-name ``` _Heads up_: Use the **SSH URL** if you’ve set up SSH keys for GitHub (fewer password prompts). --- **3. Set Up the Development Environment** Most projects require dependencies and configurations. Check the `CONTRIBUTING.md` or `README.md` for setup instructions. **Typical workflow**: ```bash # Create a virtual environment (avoid dependency conflicts) python -m venv venv source venv/bin/activate # Install dependencies pip install -r requirements.txt # Run tests to confirm everything works pytest tests/ ``` 💡 _Pro Tip_: If the project uses Docker, run `docker-compose up` for a hassle-free setup. --- **4. Create a Feature Branch** Never work directly on the `main` branch. Create a new branch for your changes: ```bash git checkout -b fix/typo-in-docs ``` 🚨 _Fun fact_: Branch names like `add-spaceship-emojis` are more memorable (and fun) than `patch-1`. --- **5. Make Changes & Commit** Now, code away 🚀. Once done, commit your changes with a **clear, concise message**: ```bash git add . git commit -m "Fix typo in quickstart guide" ``` 💥 _Golden rule_: One logical change per commit. No “fixed stuff” messages. --- **6. Write Tests (If Required)** Many projects require tests for new features. For example, **Swarmauri** mandates test coverage for every component. ```python # Example test for a new feature def test_new_feature(): result = my_function(input="test") assert result == "expected_output" ``` 📉 _Pain avoided_: Debugging failing tests now beats cryptic errors in code review later. --- **7. Push to Your Fork** Upload your branch to GitHub: ```bash git push origin fix/typo-in-docs ``` --- **8. Open a Pull Request (PR)** - Go to your fork’s GitHub page. - Click **“Compare & Pull Request”** next to your pushed branch. - Fill in the PR template: - **Title**: “Fix typo in quickstart guide” - **Description**: Explain _what_ you changed, _why_, and link related issues (e.g., “Closes #123”). - **Checklist**: Confirm tests pass and documentation is updated. 🎯 _Pro move_: Tag a maintainer (e.g., “@janedoe PTAL”) if the project’s guidelines allow it. --- **9. Respond to Feedback** Maintainers might request changes. Update your code, push to the same branch, and the PR auto-updates! ```bash git add . git commit -m "Address review feedback" git push origin fix/typo-in-docs ``` --- **After the PR is Merged** - 🎉 **Celebrate!** You’ve just contributed to open source. - 🔄 **Sync your fork**: Pull updates from the original repo to keep your fork fresh. - 🌱 **Stay involved**: Tackle another issue or help triage bugs. --- :::success{title="You Did It!"} Your first PR might feel like climbing Everest, but soon you will be sprinting up these hills. ::: ## How to Stand Out as a Contributor and Build Your Brand 1. **Solve Meaningful Problems**: Focus on high-impact issues (bugs, feature requests) that users care about. Quality > quantity. 2. **Communicate Clearly**: Write detailed PR descriptions, link related issues, and respond promptly to feedback. If possible, use screenshots/GIFs to explain UI changes. 3. **Document Everything**: Fix typos, improve tutorials, or add code comments. Great docs are rare; your work will get noticed. 4. **Share Your Work**: Post about contributions on LinkedIn/Twitter, tag the project, and link to your PR. Example: _“Just added [feature] to @PyTorch! 
Learned [X], check out the PR 👇”_ 5. **Help Others**: Answer questions in Discussions/forums. Mentoring newcomers builds trust and visibility. 6. **Stay Consistent**: Regular contributions > one-off PRs. Even small fixes keep you on maintainers’ radar. --- :::tip{title="Final Thoughts"} Contributing to open-source AI/ML isn’t just about code. It’s about **learning**, **collaborating**, and **shaping the future** of technology. Whether you’re fixing a typo in PyTorch’s docs or building a new feature for LangChain, every contribution matters. **Your journey starts now**: 1. Pick a project from this list that excites you. 2. Fork it, tackle a `good-first-issue`, and submit that PR. 3. Share your wins (and lessons) with the community. The open-source world thrives on curiosity and courage. _P.S. Tag us when you land your first contribution, we would love to celebrate with you🤝_ :::
HrJ0xWtLzLNt
ready-tensor
cc-by-nc
Program Guide: Agentic AI Developer Certification by Ready Tensor
![agentic-ai-cert-hero.webp](agentic-ai-cert-hero.webp)--DIVIDER--Welcome to the **Agentic AI Developer Certification Program** by Ready Tensor! This is a 12-week, hands-on learning journey where you'll design, build, and deploy intelligent, goal-driven AI systems. This page provides all essential information about the program, including: - Program Overview - How to Enroll - How the Program Works - Program Schedule - Certification Process - Project Details - Team-Based Learning - What's Not Covered--DIVIDER-- # What You'll Learn This program is structured into three comprehensive modules, each culminating in a practical, portfolio-worthy project: - **Module 1 (Weeks 1–4): Foundations of Agentic AI** Explore core concepts including agent architectures, retrieval-augmented generation (RAG), and tool use. You'll build your first project—a question-answering assistant. - **Module 2 (Weeks 5–8): Architecting Agentic Workflows** Learn to implement complex workflows, multi-agent collaboration, and Model Context Protocol (MCP)-aligned systems. Your second project involves creating a sophisticated multi-agent system. - **Module 3 (Weeks 9–12): Real-World Readiness** Master guardrails, evaluation strategies, logging, documentation, and deployment using FastAPI or Streamlit. In your final project, you'll deliver a production-quality agentic AI system. --DIVIDER-- # How to Enroll Follow these steps to enroll in the program: 1. **Register for a free account** on Ready Tensor (if you don't have one already). [Sign-up here](https://app.readytensor.ai/signup). 2. **Enroll in the program**. Navigate to the [`Ready Tensor Certifications` hub](https://app.readytensor.ai/hubs/ready_tensor_certifications). 3. Near the top of the page, click the **"Request to Join"** button to request access. Note the button will not be visible unless you are logged into the platform (see Step 1 above). 4. Once your request to join is approved, you are officially enrolled! You'll have immediate access to program materials, including weekly lectures, reading materials, and project guidelines. The program begins on May 19th, 2025. You can enroll up until June 6th, the due date for the first project.--DIVIDER--# Certification is Free — Expert Feedback is Optional The certification program is completely free. All participants who complete the requirements will receive an official **Agentic AI Developer Certificate**. However, if you wish to receive **expert feedback and guidance** on your projects, you can subscribe to the **Pro** or **Team** plan on Ready Tensor. These plans include structured project reviews, personalized feedback, and direct support from AI experts to help you refine your work and grow faster.--DIVIDER--# How the Program Works Each week follows a consistent and engaging format: - **Weekly Lectures**: Pre-recorded sessions (30–60 minutes) that cover key concepts. Each lecture includes a Q&A segment based on participant questions from the previous week's content, hosted via our [Discord](https://discord.com/invite/vNevxPqGQS) server. - **Reading Materials**: Curated readings, publications, and templates provided weekly to deepen your understanding and support your project work. - **Weekly Assignments**: Practical, task-oriented assignments that progressively build toward each module’s project. These are not graded, but serve as structured practice to help you prepare. 
- **Community Engagement**: Join our [Discord server](https://discord.com/invite/vNevxPqGQS) for ongoing discussions, collaborative learning, and peer support. - **Project Submissions**: You’ll submit one main project per module (three in total), either individually or in teams of up to five. Projects must be published on the Ready Tensor platform to be eligible for grading.--DIVIDER--# Program Schedule The program schedule is as follows (also see attachment titled **Agentic_AI_Developer_Certification_Schedule.pdf**).--DIVIDER-- <h2>Module 1: Foundations of Agentic AI (Weeks 1–4)</h2> Lays the groundwork for understanding and constructing agentic systems. <h3>Week 1: Introduction to Agentic AI Systems (May 19, 2025)</h3> - What is Agentic AI? Definitions, terminology, and motivations - Core Components of Agentic AI - Real-world use cases and emerging trends - Tools and Frameworks - Differentiation of Agents and Workflows <h3>Week 2: Prompts, Embeddings and RAG (May 26, 2025)</h3> - Basic prompting - Introduction to RAG systems - Vector databases and embedding models (FAISS, Chroma, etc.) <h3>Week 3: Hands-On with LLM calls, workflows and RAG (June 02, 2025)</h3> - Making your first LLM call - Building a Workflow - Building a RAG system <h3>Week 4: Project 1 - Build a RAG-Powered AI App (June 09, 2025)</h3> - **Project-focused week with no new video lectures or required readings** - Participants work on building a question-answering or document-assistant app - Chain design: Prompt + Retrieval + Response - Integration with a vector store and basic evaluation loop - Optional: Add memory, tool usage, or intermediate reasoning - Deliverable: A simple RAG-based agent system with working retrieval and output - _Note: Participants may begin project work earlier during Weeks 2–3 if desired_ --DIVIDER--:::info{title="Module 1 Project Submission Deadline"} <h3>Your module 1 project is due by 11:59 pm UTC on June 13, 2025.</h3> :::--DIVIDER-- <h2>Module 2: Architecting Agentic AI Systems (Weeks 5–8)</h2> Focuses on building autonomous and collaborative agents using modular and extensible systems. <h3>Week 5: Agent Architectures & Planning Techniques (June 16, 2025)</h3> - Agent execution models: tool-using agents, reactive vs. 
deliberative - Planning mechanisms: zero-shot, few-shot, and learned planning - Introducing MCP: Model Context Protocol - Tool abstractions, APIs, and self-reflection - Introduction to LangGraph and directed workflow graphs - Building your first agentic workflow in LangGraph <h3>Week 6: Multi-Agent Systems & Collaboration (June 23, 2025)</h3> - Design patterns for multi-agent coordination - Communication protocols and messaging (e.g., broadcast, direct, shared memory) - Role assignment and inter-agent task delegation - Coordinated tool use and shared context - Use cases: decentralized planning, team-of-agents models - Best practices for evaluating multi-agent performance <h3>Week 7: Advanced Agent Evaluation Techniques (June 30, 2025)</h3> - Evaluating agent autonomy and reasoning quality - Measuring collaboration effectiveness in multi-agent systems - Human-in-the-loop testing and intervention - Benchmarking against baselines and predefined goals - Dataset creation for agent evaluation <h3> Week 8: Project 2 - Build a Multi-Agent System (July 07, 2025)</h3> - **Project-focused week with no new video lectures or required readings** - Participants design a system of modular, composable agents - Implement inter-agent communication and memory sharing - Apply LangGraph to orchestrate role-based agent workflows - Optionally incorporate persistence via memory layers or vector DBs - Deliverable: A functional, MCP-aligned multi-agent system capable of collaborative problem solving - _Note: Participants may begin project work earlier during Weeks 6–7 if desired_ :::info{title="Module 2 Project Submission Deadline"} <h3>Your module 2 project is due by 11:59 pm UTC on July 11th, 2025.</h3> :::--DIVIDER-- <h2>Module 3: Preparing Agentic AI for Real-World Use (Weeks 9–12)</h2> Equips participants with essential skills for building safe, evaluable, and deployable systems. <h3>Week 9: Guardrails, Evaluation, and Safety (July 14, 2025)</h3> - Prompt protection and safety frameworks (Guardrails.ai, Rebuff, etc.) 
- Input/output validation and structured output constraints - Defining evaluation metrics: success, efficiency, alignment - Instrumentation and logging (LangSmith, OpenTelemetry basics) - Case studies: agent failure modes and mitigation <h3>Week 10: Deployment & Scalability Considerations (July 21, 2025)</h3> - When and how to deploy agentic systems - Lightweight deployment: FastAPI + containers - Hosting options: Hugging Face Spaces, Render, Streamlit, Gradio - Vector DB hosting, rate-limits, and cost considerations - Monitoring basics: tracing, usage tracking, user feedback <h3>Week 11: Advanced Deployment Case Studies & Troubleshooting (July 28, 2025)</h3> - Scaling agents in production settings - Troubleshooting common deployment issues - Advanced observability and performance profiling - Security, reliability, and failover considerations - Real-world case studies and deployment architectures <h3> Week 12: Final Project - Production-Aware Agentic AI System (August 04, 2025)</h3> - **Project-focused week with no new video lectures or required readings** - Capstone project: Productionize your Week 8 multi-agent system - Add guardrails, logging, and simple deployment wrapper - Document limitations, assumptions, and intended use - Deliverable: A portfolio-ready, production-aware agentic AI application - _Note: Participants may begin final project work earlier during Weeks 10–11 if desired_ :::info{title="Module 3 Project Submission Deadline"} <h3>The final project is due by 11:59 pm UTC on August 8th, 2025.</h3> ::: --DIVIDER--# Certification Process To earn your **Agentic AI Developer Certificate**, you must: - Complete all three hands-on projects by their due dates. Publish each completed project publicly on the Ready Tensor platform, including comprehensive documentation and a repository link. - Achieve at least a 70% score per project based on the evaluation criteria provided in the **AAIDC Project Evaluation Criteria.pdf** attachment (**to be uploaded soon**).--DIVIDER--# Project Details Each project is designed to be a portfolio piece, showcasing your skills and understanding of agentic AI systems. The projects are described in detail in the **Agentic_AI_Developer_Certification_Projects.pdf** attachment. --DIVIDER-- # Team-Based Learning We strongly encourage participants to complete projects in teams (3–4 members recommended). This mirrors real-world professional workflows and maximizes skill diversity: - **AI/ML Theory Expert**: Knowledge of embeddings, transformers, and applied AI concepts. - **Programming Expert**: Skilled in Python, clean coding, and version control. - **Documentation Expert**: Adept at creating polished documentation and visuals. - **UI Expert**: Experienced in building professional-quality apps using Streamlit or Gradio. Solo projects are permitted but strongly discouraged. Team formation and collaboration are facilitated via our Discord community.--DIVIDER--# What's Not Covered This certification focuses specifically on agentic system development with existing models and APIs. It does **not** include: - Model training or fine-tuning - Self-hosting of foundation models - Full-scale ML-Ops or CI/CD pipelines - Enterprise-level security frameworks - Advanced front-end development --DIVIDER--
iERF3DYAwsD9
ready-tensor
cc-by-sa
Decade of AI and ML Conferences: A Comprehensive Dataset for Advanced Research and Analysis
![rag-hero-img.jpg](rag-hero-img.jpg)--DIVIDER--# Abstract The rapid growth of artificial intelligence (AI) and machine learning (ML) research has resulted in an overwhelming amount of academic literature, making efficient document retrieval crucial for researchers. In response to this challenge, we developed a Mini-Retrieval-Augmented Generation (Mini-RAG) system that leverages a comprehensive dataset compiled from major AI and ML conferences, including NeurIPS, ICML, ICLR, AAAI, and IJCAI, spanning from 2010 to 2023. This dataset, enriched with paper titles, abstracts, authors, publication years, and source URLs, enables users to perform document similarity searches and explore research trends. The system uses SentenceTransformer ("all-MiniLM-L6-v2") to generate high-quality embeddings, combined with FastAPI for efficient, user-friendly document retrieval. Designed to be scalable and adaptable, this project aims to streamline research by enhancing access to relevant literature through advanced natural language processing techniques.--DIVIDER--# The Dataset Our dataset comprises a meticulously compiled collection of research papers from top-tier AI and ML conferences such as NeurIPS, ICML, ICLR, AAAI, and IJCAI, covering publications from 2010 to 2023. This rich dataset serves as the foundation for our document similarity system, ensuring that users have access to a wide range of research topics and trends within the field. The dataset contains the following columns: - id: A unique identifier for each paper in the dataset.<br><br> - paper_name: The title of the paper, which often encapsulates the key findings or focus of the research.<br><br> - authors: The list of authors who contributed to the paper, providing insight into the collaborative nature of the work.<br><br> - year: The year the paper was published, helping to contextualize the research within the timeline of developments in the field.<br><br> - url: A link to the paper, allowing for direct access to the full content for further reading or verification.<br><br> - abstract: A concise summary of the paper’s content, which is crucial for the similarity search as it provides a quick overview of the research focus and findings.<br><br> - source: The conference from which the paper was sourced, giving context to the type of research and the audience it was presented to. This dataset not only facilitates historical analysis of AI and ML advancements but also supports a variety of applications in academic research and industry contexts, making it an invaluable resource for document retrieval systems. You can find and download the dataset in the resources section --DIVIDER--# Methodology This section centers around creating a system that can embed a dataset of documents and perform similarity searches, enabling users to quickly retrieve the top n similar documents given a query. The backbone of this system is the SentenceTransformer model, specifically the "all-MiniLM-L6-v2" variant, known for its efficiency and effectiveness in generating high-quality sentence embeddings.--DIVIDER--## How It Works The process begins with the creation of a database containing the document embeddings. We used a straightforward Python script to load the dataset, generate embeddings for each document using the SentenceTransformer model, and then store these embeddings in a dictionary for easy retrieval. The FastAPI framework powers the inference service, enabling users to input a document and receive a list of the most similar documents from the database. 
The service computes the cosine similarity between the query document and the documents in the database, returning the top n matches.--DIVIDER--## Key Features - Efficient Embedding Generation: The use of SentenceTransformer ensures that embeddings are generated quickly and accurately, making the system suitable for real-time applications. <br><br> - Scalable and Extendable: The architecture is designed to be easily extendable, allowing for the integration of larger models or additional features as needed.<br><br> - User-Friendly API: With FastAPI, the system provides a straightforward interface for querying and retrieving documents, making it accessible even for those with minimal technical expertise.--DIVIDER--## Examples To demonstrate the power of this system, here are a couple of examples showcasing the results: :::info{title="Query Document:"} We propose a method for producing ensembles of predictors based on holdout estimations of their generalization performances. This approach uses a prior directly on the performance of predictors taken from a finite set of candidates and attempts to infer which one is best. Using Bayesian inference, we can thus obtain a posterior that represents our uncertainty about that choice and construct a weighted ensemble of predictors accordingly. This approach has the advantage of not requiring that the predictors be probabilistic themselves, can deal with arbitrary measures of performance and does not assume that the data was actually generated from any of the predictors in the ensemble. Since the problem of finding the best (as opposed to the true) predictor among a class is known as agnostic PAC-learning, we refer to our method as agnostic Bayesian learning. We also propose a method to address the case where the performance estimate is obtained from k-fold cross validation. While being efficient and easily adjustable to any loss function, our experiments confirm that the agnostic Bayes approach is state of the art compared to common baselines such as model selection based on k-fold cross-validation or a linear combination of predictor outputs. ::: :::tip{title="Most Similar Documents:"} 1. Ensembling is among the most popular tools in machine learning (ML) due to its effectiveness in minimizing variance and thus improving generalization. Most ensembling methods for black-box base learners fall under the umbrella of "stacked generalization," namely training an ML algorithm that takes the inferences from the base learners as input. While stacking has been widely applied in practice, its theoretical properties are poorly understood. In this paper, we prove a novel result, showing that choosing the best stacked generalization from a (finite or finite-dimensional) family of stacked generalizations based on cross-validated performance does not perform "much worse" than the oracle best. Our result strengthens and significantly extends the results in Van der Laan et al. (2007). Inspired by the theoretical analysis, we further propose a particular family of stacked generalizations in the context of probabilistic forecasting, each one with a different sensitivity for how much the ensemble weights are allowed to vary across items, timestamps in the forecast horizon, and quantiles. Experimental results demonstrate the performance gain of the proposed method.<br><br> 2. Virtually any model we use in machine learning to make predictions does not perfectly represent reality. So, most of the learning happens under model misspecification. 
In this work, we present a novel analysis of the generalization performance of Bayesian model averaging under model misspecification and i.i.d. data using a new family of second-order PAC-Bayes bounds. This analysis shows, in simple and intuitive terms, that Bayesian model averaging provides suboptimal generalization performance when the model is misspecified. In consequence, we provide strong theoretical arguments showing that Bayesian methods are not optimal for learning predictive models, unless the model class is perfectly specified. Using novel second-order PAC-Bayes bounds, we derive a new family of Bayesian-like algorithms, which can be implemented as variational and ensemble methods. The output of these algorithms is a new posterior distribution, different from the Bayesian posterior, which induces a posterior predictive distribution with better generalization performance. Experiments with Bayesian neural networks illustrate these findings. ::: --DIVIDER--# Using the RAG For using the Mini-RAG on your custom dataset, refer to the [github repository](https://github.com/readytensor/rt_mini_rag) and follow the instructions in the README.md file. It is very easy to use.--DIVIDER--# Conclusion The dataset we have compiled, encompassing over a decade of pioneering research from premier AI and machine learning conferences, is a critical asset for the academic and industrial research communities. It represents not just a collection of data points, but a comprehensive overview of the evolution and trends within the field of artificial intelligence over an extensive period. This rich, detailed dataset enables users to explore and analyze the trajectory of AI research, providing a historical context and a benchmark for future studies. Furthermore, the integration of this dataset with the Mini-Retrieval-Augmented Generation (Mini-RAG) system exemplifies the practical application of advanced NLP technologies to enhance document retrieval capabilities. By leveraging the SentenceTransformer model, the system efficiently sifts through complex data, facilitating the retrieval of relevant documents based on semantic similarity. This not only accelerates the research process by enabling quicker access to pertinent studies but also showcases the synergy between well-curated datasets and cutting-edge technology in pushing the boundaries of information retrieval in AI research. The project highlights the transformative potential of combining rich datasets with robust models to create powerful tools for the academic and research community.
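As a concrete illustration of the retrieval flow described in the Methodology section, below is a minimal sketch of the embedding and similarity-search steps. It assumes the dataset has been downloaded locally as a CSV with the columns listed above (the file name `ai_ml_conference_papers.csv` is only a placeholder), uses the same "all-MiniLM-L6-v2" SentenceTransformer model, and omits the FastAPI service layer; treat it as a sketch rather than the project's exact implementation.

```python
import numpy as np
import pandas as pd
from sentence_transformers import SentenceTransformer

# Load the conference-papers dataset (file name is a placeholder).
df = pd.read_csv("ai_ml_conference_papers.csv")

# Embed every abstract once, using the same model as in this project.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(df["abstract"].fillna("").tolist(), normalize_embeddings=True)

def top_n_similar(query_text: str, n: int = 5) -> pd.DataFrame:
    """Return the n papers whose abstracts are most similar to the query."""
    query_vec = model.encode([query_text], normalize_embeddings=True)[0]
    # With L2-normalized embeddings, cosine similarity reduces to a dot product.
    scores = embeddings @ query_vec
    top_idx = np.argsort(scores)[::-1][:n]
    results = df.iloc[top_idx][["paper_name", "authors", "year", "source", "url"]].copy()
    results["similarity"] = scores[top_idx]
    return results

print(top_n_similar("agnostic Bayesian ensembles of predictors"))
```

Normalizing the embeddings up front lets cosine similarity reduce to a dot product, so each query costs only a single matrix-vector multiplication at inference time.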
JNgtglsVpvrj
rahul.parajuli27
none
Publish Like a Pro: Essential Steps for a Perfect Ready Tensor Publication
![Minimalist Process Product Research Flowchart (2).png](Minimalist%20Process%20Product%20Research%20Flowchart%20(2).png)--DIVIDER--# Checklist for a High-Quality Ready Tensor Publication --DIVIDER--*Publishing on Ready Tensor is an exciting opportunity to share your AI models, datasets, and research with a global community. To maximize the impact of your work, ensure clarity, and enhance reusability, follow this checklist to refine and optimize your publication.*--DIVIDER--:::tip{title="Have you marked them before submitting?"} - [x] Craft an Engaging Title - [x] Pick an Appropriate License - [x] Add Relevant Tags - [x] Use an Engaging Hero Image - [x] Follow Best Practices for Technical Content - [x] Link Your Repository - [x] Ensure a Well-Structured Repository - [x] Upload Any Relevant Files - [x] Run the Automated Assessment :::--DIVIDER--**1. Craft an Engaging Title** Your title is the **first impression** of your work, make it count! A compelling title should be concise, descriptive, and clearly convey the main contribution of your research or project. **2. Pick an Appropriate License** Choosing the right open-source license is crucial for setting clear usage terms. Consider how you want others to use and modify your work, and refer to Ready Tensor’s [License Guide](https://app.readytensor.ai/publications/licenses-for-ml-projects-a-primer-qWBpwY20fqSz) to select the most suitable option. **3. Add Relevant Tags** Tags help users discover your work more easily. Use specific and accurate tags that reflect your research domain, methodology, and key concepts. **4. Use an Engaging Hero Image** A visually appealing hero image makes your publication more attractive and sets the tone for your work. Choose an image that represents your research well, whether it's a model visualization, dataset sample, or conceptual diagram. **5. Follow Best Practices for Technical Content** Creating a well-structured and readable publication is essential. Refer to Ready Tensor’s [best-practice publications on technical writing and presentation](https://app.readytensor.ai/publications/engage-and-inspire-best-practices-for-publishing-on-ready-tensor-SBgkOyUsP8qQ) to ensure clarity, consistency, and completeness in your content. **6. Link Your Repository** Transparency and reproducibility are key! Provide a direct link to your GitHub, GitLab, or other code repositories so others can access, understand, and build upon your work. **7. Ensure a Well-Structured Repository** A well-organized repository increases the usability of your project. Ensure your repository includes: - A detailed **README file** with a clear description, installation steps, usage instructions, and dataset details. - A logical file structure that makes it easy to navigate. - Code **documentation** and **comments** to help users understand and extend your work effortlessly. - **Dependencies** listed in `requirements.txt` or `project.toml` or similar file. - **License file** clarifying the rights and permissions you are granting to others. **8. Upload Any Relevant Files** To make your publication fully self-contained, include all necessary supporting files such as datasets, configuration files, model weights, and scripts. **9. Run the Automated Assessment** Before submitting, use Ready Tensor’s **automated assessment tool** to check for common issues and improvement suggestions. 
While not all recommendations are mandatory, addressing key feedback can significantly enhance the quality of your publication.--DIVIDER--# Key Considerations for an Outstanding Publication **Promote Open-Source & Transparency** Ready Tensor encourages openness and collaboration. A well-documented, linked repository helps others learn from and contribute to your work. **Cater to Your Target Audience** You’re publishing for the community, not just for yourself. Make your content accessible to different levels of expertise by writing in a clear and engaging manner. **Make Your Publication Stand Out** Ready Tensor hosts numerous high-quality publications. Highlight what makes yours unique by showcasing its real-world impact, use cases, or innovative aspects. **Learn from Other Publications** Explore existing publications to see what works well. When your publication goes live, you’ll find similar publications listed at the bottom of your page—use these as references for inspiration. **Borrow Ideas, but Give Credit** Feel free to take inspiration from other successful publications, but always provide proper attribution when referencing other works. --DIVIDER--# Final Thoughts By following this checklist, you ensure that your Ready Tensor publication is not only informative but also engaging and accessible. A well-structured and well-presented publication increases visibility, enhances credibility, and fosters meaningful collaborations within the AI community. Start refining your work today and share it with the world!
kwFKTldV27nA
ready-tensor
cc-by-nc
Agentic AI Developer Certification Program: Welcome & Orientation
![AAIDC-welcome and orientation.png](AAIDC-welcome%20and%20orientation.png)--DIVIDER--# 👋 Welcome to the Program! We’re so glad you’re here. This is the first stop in your journey through the Agentic AI Developer Certification Program (AAIDC). Think of this page as your orientation hub: quick intro to the people, purpose, and plan behind what you’re about to experience. Watch the videos below to get familiar with the team, understand why we built this program, and learn how to make the most of it. Let’s get started. 🚀 -----DIVIDER--# 👋 Meet the Instructor & Program Creators You’ll be learning from **Abhyuday Desai, Ph.D.**, the founder of Ready Tensor. He’s led AI/ML teams across industries for 20 years and now he’s here to guide you through the program. You’ll also meet the amazing team that brought this program to life: Victory (our curriculum lead), and a crew of AI/ML engineers who’ve supported everything from content development to tool/template creation and community outreach. 🎥 **Watch this video** to hear about the people behind AAIDC, their backgrounds, and why they care deeply about this program. :::youtube[Title]{#akI__I-QK0Q} --- --DIVIDER--# 💡 Why We Created This Program This program was born from demand. After launching our global Agentic AI competition and seeing 700+ incredible submissions, we heard one question over and over: > “How do I actually get started with Agentic AI?” We created this program to answer that question, and to invite you to help build the very tools we're working on ourselves. 🎥 **Watch this video** to learn what inspired the program and how it fits into our broader vision at Ready Tensor. :::youtube[Title]{#YbGhI8dmpiw} --- --DIVIDER-- # 📚 What You’ll Learn This program is structured into three modules over 12 weeks: Module 1 – Foundational concepts of agentic AI Module 2 – Building agentic AI systems Module 3 – Production-readiness and deployment 🎥 Watch this video for three key takeaways that will help you understand what this program is really about - beyond just the syllabus. :::youtube[Title]{#AED7U5VN19U} -----DIVIDER-- # 🛠️ How the Program Works (And How to Get the Most Out of It) This isn’t a lecture-first course. Instead, we give you projects, then let you figure things out, just like you would on the job. You’ll have weekly content drops to guide you, but you’re free to learn at your own pace and in your own style. The goal is to **build**, not just watch and read. Here’s how the weekly flow works: - 📅 **Lectures for the week** will be posted **before Monday** - 🎥 **Lecture videos** will be up by **Tuesday** - ❓ **Q/A video** (answering common questions from the previous week) also drops on **Tuesdays** <h2> Important note: </h2> The **videos won’t walk you through the lectures line by line**. Instead, we’ll highlight what to focus on, point out common pitfalls, and raise key questions to think about. We’re not here to spoon-feed you. We’re here to guide you, like a manager would guide their team. So here’s our recommendation: - Start with the project description - Finalize your project idea and team - Make a plan and divide the work - Learn what you need to get the job done - Use the lectures, tools, or any other resource you prefer - And just **build something great** 🎥 **Watch this video** for a quick overview of how to approach the program and make the most of it. 
:::youtube[Title]{#gV3xr6coF0s} -----DIVIDER-- # 💬 Join the Community on Discord If you get stuck, have a question, or just want to see what others are working on, **Discord is the place to be**. In the community, you can: * Ask questions and get help * Share your project progress * See how others are approaching the same challenges * Stay updated with program announcements It’s a great way to learn from others and not feel like you’re going through the program alone. 🔗 [Join the Discord Server](https://discord.com/invite/EsVfxNdUTR) -----DIVIDER--# ✅ That’s It for Orientation You now have a sense of what this program is, who’s behind it, and how to get started. Head over to Module 1 when you’re ready, and begin exploring the first project. If you have questions along the way, the Discord community is a great place to ask and connect. Let’s get to work. --DIVIDER-- --- [➡️ Next - Module 1 Project Description](https://app.readytensor.ai/publications/4n07ViGCey0l) ---
ljGAbBceZbpv
ready-tensor
cc-by-sa
Distance Profile for Time-Step Classification in Time Series Analysis
![distance_profile_hero.png](distance_profile_hero.png)--DIVIDER--TL;DR: Distance Profile is a versatile and powerful technique in time series analysis. In this work, we apply it to a task we define as Time-Step Classification, where the goal is to classify individual time steps within a time series. Our approach demonstrates its effectiveness and potential for broader applications in this domain.--DIVIDER-- # Abstract Time series analysis often requires classifying individual time points, a task we term Time-Step Classification. This publication explores the application of Distance Profile, an existing versatile technique in time series analysis, to this challenge. We adapt the Distance Profile method, using MASS (Mueen's Algorithm for Similarity Search) for efficient computation, specifically for Time-Step Classification. Our approach leverages the intuitive concept of nearest neighbor search to classify each time step based on similar sequences. We present our implementation, including modifications for multivariate time series, and demonstrate its effectiveness through experiments on diverse datasets. While not achieving the highest accuracy compared to complex models like LightGBM, this adapted method proves valuable as a strong baseline and quick prototyping tool. This work aims to highlight Distance Profile as a simple yet versatile approach for Time-Step Classification, encouraging its broader adoption in practical time series analysis.--DIVIDER--# Introduction Time series data is ubiquitous, from stock prices to sensor readings, but analyzing it presents unique challenges. One such challenge is Time-Step Classification - labeling each point in a time series. While many complex methods exist, sometimes the most intuitive approaches yield impressive results. In this paper, we explore Distance Profile, a method rooted in the simple concept of nearest neighbor search. We show how this straightforward idea becomes a powerful tool for Time-Step Classification: 1. We introduce Distance Profile and its applications in time series analysis. 2. We detail our implementation using the MASS algorithm, including adaptations for multivariate time series. 3. We demonstrate its effectiveness through experiments on various datasets. By showcasing how this simple, intuitive method can tackle complex time series challenges, we aim to highlight its value for establishing baselines and quick prototyping. Our work serves as a practical guide, encouraging practitioners to consider Distance Profile alongside more advanced techniques in their analytical toolkit. --DIVIDER--# Distance Profile Distance Profile is a fundamental technique in time series analysis that measures the similarity between a query subsequence and all possible subsequences of a longer time series. This method is crucial for various tasks such as pattern recognition, anomaly detection, and classification in time series data. ## Definition The distance profile of a query subsequence $Q$ with respect to a time series $T$ is a vector where each element represents the distance between $Q$ and a corresponding subsequence of $T$. Formally: - Let $T$ be a time series of length $n$. - Let $Q$ be a query subsequence of length $m$. - The distance profile $D$ is an$(n-m+1)$ length vector. - Each element $D[i]$ represents the distance between $Q$ and the subsequence of $T$ starting at index $i$. 
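Putting the definition above in symbols, and using the z-normalized Euclidean distance described in the Computation section below (other metrics are possible, as noted later), the distance profile can be written as:

$$
D[i] = \left\| \hat{Q} - \hat{T}_{i,m} \right\|_2, \qquad i = 1, \dots, n - m + 1
$$

where $T_{i,m} = (t_i, t_{i+1}, \dots, t_{i+m-1})$ is the length-$m$ subsequence of $T$ starting at index $i$, and $\hat{Q}$ and $\hat{T}_{i,m}$ denote the z-normalized (zero-mean, unit-variance) versions of $Q$ and $T_{i,m}$.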
## Computation The most commonly used distance measure for calculating the distance profile is the z-normalized Euclidean distance, which is robust against variations in scale and offset. The computation involves two key steps: 1. **Z-Normalization**: Each subsequence of the time series $T$ and the query $Q$ is individually normalized to have zero mean and unit variance. <br/> 2. **Distance Computation**: The Euclidean distance between the normalized $Q$ and each normalized subsequence of $T$ is calculated and stored in the distance profile vector. <br/> --DIVIDER--:::info{title="Info"} While z-normalized Euclidean distance is common, other distance metrics can be used, such as cosine similarity, Manhattan distance, or Minkowski distance. The choice of metric can be treated as a tunable hyperparameter, optimized for the specific requirements of the downstream task. :::--DIVIDER--## Distance Profile Example **Sample Dataset for Demonstration** To illustrate the application of Distance Profile in Time-Step Classification, we will use a real-world time series representing daily weather data for the city of Los Angeles. This dataset spans three full years, from 2020 to 2022, and includes various meteorological parameters such as temperature, humidity, and wind speed. We chose this dataset for its accessibility and clear seasonal patterns, making it ideal for demonstrating how Distance Profile identifies similar patterns across different time periods. Later in the experiments section, we will work with more typical, complex datasets to thoroughly evaluate the method's performance. The dataset is uploaded in the **Resources** section of this article. See file titled `los_angeles_weather.csv`. We will use the feature series titled `maxTemp` for our demonstration. Below is the plot of the daily maximum temperature in Los Angeles over the three years. It illustrates the distinct temperature patterns and seasonal fluctuations in the city, providing a rich dataset for analyzing time-series patterns.--DIVIDER-- ![los_angeles_maxtemp_2020_2022.png](los_angeles_maxtemp_2020_2022.png)--DIVIDER--**Query Subsequence** The dataset provides an excellent example for demonstrating how Distance Profile can identify similar patterns within a time series. For our demonstration, we select a query period representing the first 10 days in the dataset (i.e., starting on January 1st and ending on January 10th, 2020.) The following chart shows the maximum temperatures over the query period.--DIVIDER-- ![query_period_maxtemp_2020.png](query_period_maxtemp_2020.png)--DIVIDER--The goal of this example is to identify other similar temperature patterns throughout the three-year period using Distance Profile. By applying Distance Profile to this query subsequence, we can explore how well the technique can locate similar temperature trends within the broader time series. This exercise not only showcases the practical utility of Distance Profile but also demonstrates its effectiveness in identifying meaningful patterns in real-world weather data. <br/>--DIVIDER--**Implementation using NumPy** The following code is a simple implementation of the distance profile algorithm on a one-dimensional series.
```python import numpy as np def z_normalize(ts): """Z-normalize a time series.""" return (ts - np.mean(ts)) / np.std(ts) def sliding_window_view(arr, window_size): """Generate a sliding window view of the array.""" return np.lib.stride_tricks.sliding_window_view(arr, window_size) def distance_profile(query, ts): """Compute the distance profile of a query within a time series.""" query_len = len(query) ts_len = len(ts) # Z-normalize the query query = z_normalize(query) # Generate all subsequences of the time series subsequences = sliding_window_view(ts, query_len) # Z-normalize the subsequences individually subsequences = np.apply_along_axis(z_normalize, 1, subsequences) # Compute the distance profile distances = np.linalg.norm(subsequences - query, axis=1) return distances # You can now apply the above functions to your temperature data # by passing the relevant query and time series arrays. # Compute the distance profile dist_profile = distance_profile(query, time_series) ```--DIVIDER--:::info{title="Note"} This numpy code provided above is for illustration purposes. For a more efficient implementation, use `matrixprofile` or `stumpy` python packages. The package `stumpy` offers Mueen’s Algorithm for Similarity Search (MASS) for fast and scalable distance profile. Using it, the code simplifies as follows: ```python import stumpy # ... read your data and create the query and time_series numpy arrays # query is a 1d numpy array # time_series is a 1d numpy array distance_profile = stumpy.core.mass(query, time_series) ``` :::--DIVIDER--The following chart displays the distance profile for the given query and time_series. The 3 nearest-neighbors are time-windows starting on May 29th, 2020, December 3rd, 2020, and May 30th, 2021. These are the locations in the time series where the distance profile values are the lowest, indicating the most similar subsequences to the query.--DIVIDER-- ![distance_profile.png](distance_profile.png)--DIVIDER--Next, we visualize and compare the patterns in the 3 nearest neighbors with the original query in the following chart.--DIVIDER-- ![query_and_neighbors_2x2.png](query_and_neighbors_2x2.png)--DIVIDER--We can observe the similarities between the query and its nearest neighbors. The query time series shows a slight upward trend during the first 6 days, followed by a downward trend over the next 4 days. The three nearest neighbors exhibit similar patterns, effectively capturing the essence of the query subsequence. It may seem surprising that the nearest neighbors to the query period from January 1st to January 10th, 2020, are not all from the same time of year (winter). In fact, two of the nearest neighbors fall in late May and early June. For example, while the average temperature during the query period is 19.8°C, the nearest neighbor on May 29, 2020, has an average temperature of 26.6°C. This occurs because both the subsequences and the query are z-normalized before calculating the distance profile. Z-normalization removes magnitude differences, allowing the distance profile to focus on the shape of the temperature curve rather than the absolute values. This approach enables the identification of similar patterns in the data, regardless of differences in scale or offset.--DIVIDER--## MASS and STUMPY Distance Profile involves calculating the distance between a query subsequence and all possible subsequences within a time series. 
While the basic concept can be implemented using NumPy, as shown above, this approach can become computationally expensive, especially for large datasets. To address this, Mueen's Algorithm for Similarity Search (MASS) was developed as an optimized and highly efficient method for computing the distance profile. MASS leverages the Fast Fourier Transform (FFT) to significantly speed up the computation, making it well-suited for large-scale time series data. Essentially, MASS is a fast implementation of the Distance Profile algorithm, providing the same results but with much greater efficiency. By using the `stumpy` package, which implements MASS, we can achieve scalable and rapid distance profile, enabling its use in real-world applications where performance and speed are critical.--DIVIDER-- ## Multi-Dimensional Distance Profile The concept of distance profile can be extended to multivariate time series data, where each time point consists of multiple features or channels. This extension is crucial for performing similarity searches on multivariate time series, a common requirement in many real-world applications where data is collected across multiple channels simultaneously. To compute a multi-dimensional distance profile, we can take one of two approaches: 1. **Summing Individual Distance Profiles**: Calculate the distance profile for each feature separately and then sum them to form a multi-dimensional distance profile. <br/> 2. **Direct Multivariate Euclidean Distance**: Compute the multivariate Euclidean distance directly across all features. <br/> In our work, we opted for the first approach—summing the distance profiles of individual features. We acknowledge that this choice was somewhat arbitrary, and the impact of this decision on the results could be an interesting area for further exploration. We utilized Mueen’s Algorithm for Similarity Search (MASS) to calculate the multi-dimensional matrix profile. Here’s how you can implement this approach: ```python def multi_dimensional_mass( query_subsequence: np.ndarray, time_series: np.ndarray ) -> np.ndarray: """ Calculate the multi-dimensional matrix profile. Args: query_subsequence (np.ndarray): The query subsequence. time_series (np.ndarray): The time series. Returns: np.ndarray: The multi-dimensional matrix profile. """ for dim in range(time_series.shape[1]): if dim == 0: profile = stumpy.core.mass( query_subsequence[:, dim], time_series[:, dim] ) else: profile += stumpy.core.mass( query_subsequence[:, dim], time_series[:, dim] ) return profile ``` --DIVIDER-- # Time-Step Classification Time-step classification is a challenging task in time series analysis, where the goal is to assign a label to each individual time point within a sequence. This type of classification is crucial in various real-world applications, where the temporal dynamics of the data play a significant role in understanding and predicting outcomes. The following are a couple of examples where time-step classification is applied: 1. **Human Activity Recognition**: In wearable technology and smart devices, time-step classification is used to identify and categorize human activities such as walking, running, or sitting, based on sensor data collected over time. Each time step in the sensor data corresponds to a specific activity label, enabling real-time monitoring and analysis. <br/> 2. 
**ECG Signal Classification**: In medical diagnostics, time-step classification is applied to ECG signals to detect and classify heartbeats as normal or indicative of various arrhythmias. Each time step in the ECG signal represents a moment in the cardiac cycle, and correctly labeling these steps is crucial for accurate diagnosis and treatment. <br/>--DIVIDER--## Problem Definition Time-step classification involves assigning a label to each time step within a sequence, whether the data is univariate or multivariate. The dataset for this task typically includes the following characteristics: - **Input Features**: The data consists of time series, which can be either univariate (single feature) or multivariate (multiple features). - **Label Assignment**: For each time step, a specific label needs to be assigned, indicating the class or category of that particular time point. - **Training and Inference Data**: - **Training Data**: Contains sequences that are fully labeled, providing the model with both the input features and the corresponding labels. - **Test (Inference) Data**: Contains sequences without labels, where the model needs to predict the label for each time step. - **Multiple Samples**: The dataset may include multiple sequences, each representing different instances or subjects. For example, in Human Activity Recognition (HAR), each sequence might correspond to a different person performing various activities, with labels indicating the specific activity at each time step. - **Variable Sequence Lengths**: The length of sequences can vary across both training and test data, meaning that each sample may have a different number of time steps. Our goal is to train a model on the labeled training data so that it learns to accurately assign labels to each time step in the test data.--DIVIDER--## Distance Profile for Time-Step Classification In this section, we explore how the Distance Profile technique, particularly through Mueen’s Algorithm for Similarity Search (MASS), can be adapted and applied to the task of time-step classification. By calculating the distance profile for each time step, we can effectively classify individual time points within a time series, enabling more precise and informed analysis across various domains. The general approach is as follows: 1. **Subsequence Querying**: For each sequence in the test dataset, we break it down into smaller subsequences, or "queries." Each query represents a window of time steps within the sequence that we want to classify. <br/> 2. **Finding Nearest Neighbors**: For each query, we calculate its distance profile against the training dataset, identifying its k-nearest neighbors—subsequences in the training data that most closely match the query in terms of shape and pattern. <br/> 3. **Label Assignment**: The labels of these k-nearest neighbors are then used to assign a label to each time step in the query. This allows us to classify each time point in the test sequence based on the most similar patterns observed in the labeled training data. <br/> --DIVIDER--## Implementation Details To adapt the MASS algorithm for Time-Step Classification, we made several key modifications to effectively handle the nuances of this task. These modifications ensure that the algorithm can accurately classify each time step in the test data by leveraging the labeled training data. Below are the critical components of our implementation: **Windows** Each sequence (i.e. 
sample) in the test data is divided into smaller windows to create the subsequences (queries) that will be classified. The window length is a tunable parameter, determined as a function of the minimum sequence length in the training data. This approach allows us to capture relevant patterns while maintaining consistency across varying sequence lengths. **Strides** To ensure comprehensive coverage of the test data, we allow overlapping windows to be created. The degree of overlap is controlled by a stride factor, enabling us to balance between computational efficiency and the thoroughness of the classification. **Distance Profile Calculation** For each window in the test data, we compute the distance profile over all subsequences from all samples in the training data. This is done using the MASS algorithm, which calculates the Euclidean distance on z-normalized data for each feature. The final distance measure for each subsequence is obtained by summing the distances across all features, ensuring that all aspects of the multivariate time series are considered. **k-Nearest Neighbors** Once the distance profile is calculated for each window in the test data, we identify the k-nearest neighbors from the training data based on the computed distances. These neighbors represent the most similar windows in the training set. The labels associated with these neighbors, which are one-hot encoded, are extracted for further processing. **Averaging Labels** A single time step in the test data may appear in multiple query windows, and for each window, we have k-nearest neighbors subsequences from the training data. To determine the final label for each time step index _i_ within a query, we average the labels from corresponding index _i_ across all the neighbor subsequences. This approach produces a set of label probabilities, from which the most likely label is assigned to the time step. --DIVIDER--:::info{title="Info"} The complete implementation of our approach is available in our [GitHub repository](https://github.com/readytensor/rt_tspc_distance_profile). It is also linked in the **Models** section of this publication. The implementation is designed in a generalized way, allowing users to easily apply it to their own datasets. Additionally, the implementation is dockerized for convenience, though users can also run it locally if they prefer. The implementation leverages the STUMPY library. :::--DIVIDER--## Limitations of the Approach While the Distance Profile method for Time-Step Classification offers simplicity and interpretability, it has several limitations: - **Computational Expense**: For large datasets, calculating distance profiles can be computationally intensive, potentially limiting scalability. - **Local Pattern Focus**: Predictions depend entirely on the k-nearest neighbors identified. If these neighbors contain noisy/anomalous data (in features or labels), it can lead to noisy predictions. - **Parameter Sensitivity**: Results can be sensitive to the choice of distance metric and the number of nearest neighbors ($k$), requiring careful tuning. - **Computational Burden During Inference**: Unlike models that learn during a training phase, this method performs all its computations during the inference phase. This can lead to slower predictions on large datasets compared to other complex models which, though potentially slow to train, are typically quick to make predictions once trained. 
These limitations should be considered when applying this approach, particularly for large-scale or complex time series classification tasks.--DIVIDER--# Experiments We tested the distance profile algorithm for time-step classification on five benchmarking datasets: EEG Eye State, HAR70+, HMM Continuous (synthetic), Occupancy Detection, and PAMAP2. These datasets, along with additional information about them, are available in the [GitHub repository](https://github.com/readytensor/rt_datasets_time_step_classification), which is also linked in the **Datasets** section of this publication.--DIVIDER-- ## Evaluation Results The performance of the distance profile model was evaluated using a variety of metrics, including accuracy, weighted and macro precision, weighted and macro recall, weighted and macro F1-score, and weighted AUC. The results for each dataset are summarized in the table below: | Dataset Name | Accuracy | Weighted Precision | Macro Precision | Weighted Recall | Macro Recall | Weighted F1-score | Macro F1-score | Weighted AUC Score | | ----------------------------------- | :------: | :----------------: | :-------------: | :-------------: | :----------: | :---------------: | :------------: | :----------------: | | EEG Eye State | 0.611 | 0.869 | 0.545 | 0.611 | 0.628 | 0.718 | 0.584 | 0.625 | | HAR70+ | 0.641 | 0.64 | 0.47 | 0.641 | 0.369 | 0.641 | 0.414 | 0.742 | | HMM Continuous Timeseries Dataset | 0.641 | 0.614 | 0.594 | 0.641 | 0.552 | 0.627 | 0.572 | 0.818 | | Occupancy Detection | 0.893 | 0.892 | 0.885 | 0.893 | 0.834 | 0.893 | 0.859 | 0.972 | | PAMAP2 Physical Activity Monitoring | 0.616 | 0.657 | 0.681 | 0.616 | 0.606 | 0.636 | 0.641 | 0.929 | As is common in benchmarking studies, we observe varying performance across different datasets. This variation likely reflects the inherent predictability of each dataset rather than specific strengths or weaknesses of the Distance Profile method. All models, including more complex ones, typically face similar patterns of relative difficulty across datasets. Next, we compare the results of the Distance Profile model with those of LightGBM, a top-performing model in a comparative analysis conducted by this publication.--DIVIDER-- ## Comparison with LightGBM For comparison, we now present the results from one of the top-performing model, LightGBM, from a comparative analysis conducted in this publication. --DIVIDER--| Dataset Name | Accuracy | Weighted Precision | Macro Precision | Weighted Recall | Macro Recall | Weighted F1-score | Macro F1-score | Weighted AUC Score | |---------------------------------------|:--------:|:------------------:|:---------------:|:---------------:|:------------:|:-----------------:|:--------------:|:------------------:| | EEG Eye State | 0.458 | 0.857 | 0.523 | 0.458 | 0.566 | 0.597 | 0.544 | 0.581 | | HAR70+ | 0.862 | 0.87 | 0.55 | 0.862 | 0.496 | 0.866 | 0.522 | 0.859 | | HMM Continuous Timeseries Dataset | 0.876 | 0.875 | 0.868 | 0.876 | 0.85 | 0.876 | 0.859 | 0.974 | | Occupancy Detection | 0.996 | 0.996 | 0.992 | 0.996 | 0.997 | 0.996 | 0.994 | 0.998 | | PAMAP2 Physical Activity Monitoring | 0.731 | 0.741 | 0.737 | 0.731 | 0.716 | 0.736 | 0.726 | 0.951 | --DIVIDER--**Note**: Detailed results for all models in the Time Step Classification benchmark are available in this [GitHub repository](https://github.com/readytensor/rt_tspc_lightgbm). --DIVIDER--The LightGBM model also shows performance variability across datasets, with a notable correlation to the Distance Profile method's results. 
For instance, both models achieve their highest performance on the Occupancy Detection dataset. Overall, LightGBM outperforms the Distance Profile method. Averaged across the five datasets, the Macro F1-score of the Distance Profile model is 0.614, compared to 0.729 for LightGBM. This performance gap can be attributed to LightGBM's greater complexity and expressiveness, allowing it to capture more intricate data patterns than the simpler Distance Profile method. Despite not matching LightGBM's accuracy, the Distance Profile model remains valuable for establishing benchmarks and quick prototyping. We recommend using it as a reference point during the development of more sophisticated models.--DIVIDER--# Summary Distance Profile is a simple and versatile tool in time series data mining. It works by calculating the distance between a query subsequence and all other subsequences within a time series, forming the foundation for advanced analytical tasks. We utilized Mueen's Algorithm for Similarity Search (MASS) for its efficiency and scalability, making it ideal for large real-world datasets. The process involves: - Z-normalizing the time series and query to manage scale variations. - Computing the Euclidean distance for each subsequence against the query. - Supporting both univariate and multivariate data for comprehensive analysis. While Distance Profile may not always achieve the highest accuracy compared to more complex models, it is invaluable for establishing strong baselines. Its simplicity and adaptability make it a must-have tool before advancing to more sophisticated methods. Beyond time-step classification, Distance Profile is also effective for anomaly detection, motif discovery, and time series segmentation. Its broad applicability makes it an essential component of any data scientist's toolbox. --DIVIDER--# References 1. Law, Sean M. "STUMPY: A powerful and scalable Python library for time series data mining." Journal of Open Source Software 4, no. 39 (2019): 1504. Available at: https://stumpy.readthedocs.io. 2. Zhong, Sheng, and Abdullah Mueen. "MASS: distance profile of a query over a time series." Data Mining and Knowledge Discovery (2024): 1-27.
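As a supplementary illustration of the procedure described in the Implementation Details section (query windows, MASS distance profiles, k-nearest neighbors, and label averaging), here is a simplified, univariate sketch. It assumes a single training sequence with integer-coded labels (0 to n_classes − 1), a stride no larger than the window length, and arbitrary placeholder values for the window length, stride, and k; the generalized multivariate, multi-sample, dockerized implementation lives in the linked GitHub repository.

```python
import numpy as np
import stumpy

def classify_time_steps(train_series, train_labels, test_series,
                        window_len=50, stride=10, k=3, n_classes=2):
    """Label every time step of test_series with distance-profile k-NN (simplified sketch)."""
    train_series = np.asarray(train_series, dtype=float)
    test_series = np.asarray(test_series, dtype=float)
    train_labels = np.asarray(train_labels, dtype=int)
    assert len(test_series) >= window_len and len(train_series) >= window_len
    assert stride <= window_len, "stride must not exceed window_len so every step is covered"

    # Query windows over the test sequence; force a final window so the tail is covered too.
    starts = list(range(0, len(test_series) - window_len + 1, stride))
    if starts[-1] != len(test_series) - window_len:
        starts.append(len(test_series) - window_len)

    votes = np.zeros((len(test_series), n_classes))
    counts = np.zeros(len(test_series))

    for s in starts:
        query = test_series[s:s + window_len]
        # Distance profile of the query against the training series (z-normalized, via MASS).
        dist_profile = stumpy.core.mass(query, train_series)
        # Start indices of the k most similar training windows.
        nn_starts = np.argsort(dist_profile)[:k]
        # Average the neighbors' one-hot labels, position by position within the window.
        window_votes = np.zeros((window_len, n_classes))
        for nn in nn_starts:
            nn_labels = train_labels[nn:nn + window_len]
            window_votes[np.arange(window_len), nn_labels] += 1.0 / k
        votes[s:s + window_len] += window_votes
        counts[s:s + window_len] += 1

    # Average the label probabilities over all windows covering each step, then take the argmax.
    probs = votes / counts[:, None]
    return probs.argmax(axis=1)
```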
LX9cbIx7mQs9
ready-tensor
cc-by-sa
Markdown for Machine Learning Projects: A Comprehensive Guide
![markdown-for-documentation.svg](markdown-for-documentation.svg)--DIVIDER--# Overview This comprehensive guide focuses on using Markdown for documentation in machine learning projects. Markdown is an invaluable tool that facilitates the creation of readable and easy-to-follow documentation. In the complex and collaborative world of machine learning, clear and consistent documentation is essential. Markdown excels in this role by offering a straightforward, widely-adopted format that can be easily shared and understood by both technical and non-technical team members. The guide covers the fundamentals of Markdown and its specific applications in AI and machine learning contexts. It provides resources for leveraging Markdown to improve project documentation, enhance collaboration, and streamline workflows. From basic syntax to advanced features like LaTeX integration, this guide caters to both seasoned data scientists and those new to the field of AI, enabling the creation of machine learning projects that are not only technically robust but also easy to understand and navigate.--DIVIDER--# Introduction to Markdown This section provides an introduction to Markdown and its significance in Machine Learning projects. **What is Markdown?** Markdown is a lightweight markup language used to add formatting elements to plaintext text documents. Designed for readability and ease of use, its primary purpose is to be as straightforward as possible for both writing and reading. Markdown allows for the creation of lists, links, tables, bold and italic text, and more, all using plain text characters. Markdown files, typically saved with the .md extension, can be converted to various output types including HTML, PDF, and Word documents. **Why Use Markdown?** Markdown has become a popular choice for documentation in machine learning projects for several key reasons: 1. **Readability**: The syntax is designed to be easily readable and writable, crucial when dealing with complex machine learning projects that require substantial documentation. 2. **Flexibility**: Markdown can be converted into many other file formats such as HTML and PDF, facilitating easy sharing and presentation of documents. 3. **Ubiquity**: Widely used in data science and machine learning communities, Markdown is found in GitHub README files, Jupyter notebooks, blogs, and documentation. 4. **Integration**: Many text editors and content management systems support Markdown natively or via plugins, simplifying the process of writing and rendering Markdown text. **Importance in Machine Learning Projects** Documentation plays a pivotal role in machine learning projects. The complexity of these projects necessitates documenting not just code, but also data schemas, preprocessing decisions, model configurations, experiment results, and other critical details. Good documentation aids in project maintenance and collaboration, and Markdown serves as a reliable tool to achieve this. Common use cases of Markdown in ML project documentation include: - **README Files**: Providing project overviews, installation instructions, and usage examples. - **Tutorials and Guides**: Documenting processes for environment setup, data preprocessing, model training, and evaluation. - **API Documentation**: Creating reference documentation for ML libraries or APIs. - **Jupyter Notebook Documentation**: Annotating code, providing explanations, and describing experiment results. 
- **Model Documentation**: Describing model architecture, hyperparameters, training methodology, and performance metrics. - **Changelogs**: Tracking updates, new features, bug fixes, and other modifications over time. By leveraging Markdown for these use cases, clear, well-formatted, and easily maintainable documentation can be created for ML projects. Markdown's compatibility with various tools and platforms contributes to its popularity among developers and data scientists in the field of machine learning. --DIVIDER--# Markdown Editors Now that we understand the importance and purpose of Markdown in machine learning projects, let's take a look at some tools that can make our Markdown writing experience even more enjoyable and efficient. The tools we'll discuss in this section are called Markdown editors. Markdown editors are essentially text editors with added features that make writing Markdown more convenient. The features range from syntax highlighting, which makes it easier to see and understand your Markdown structure, to preview functions that allow you to see the rendered output of your Markdown text in real-time. Here are some of the most popular Markdown editors used in the data science community: --DIVIDER-- **Jupyter Notebook** Jupyter Notebook is a popular tool among data scientists and researchers. It is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations, and narrative text. Jupyter supports Markdown, which means you can include rich text to explain your data and code right next to the cells where the analysis occurs. **Visual Studio Code** Visual Studio Code (VS Code) is a free, open-source, and powerful editor that supports a myriad of programming languages. It also has excellent support for Markdown. VS Code has a Markdown preview feature, which lets you see the rendered output of your Markdown file while you're writing. **Dillinger** Dillinger is a free, cloud-enabled, open-source Markdown editor that operates in your web browser. It provides an immediate preview of your Markdown as you type. You can also export your documents as Markdown, HTML, or PDF, and it even offers integration with popular platforms like GitHub, Dropbox, Google Drive, and OneDrive. Since it's browser-based, Dillinger is a great option when you're working across different devices or don't want to install a dedicated editor. **Typora** Typora is a minimalistic Markdown editor that offers a seamless experience between writing and previewing. Unlike many other Markdown editors, Typora doesn't separate the writing interface from the preview interface, instead providing a real-time preview as you type. Typora is proprietary software (not open-source), but it's free to use during its beta phase (which is 14 days at the time of writing this guide). **Markdown Pad** Markdown Pad is a full-featured, commercial Markdown editor for Windows with built-in real-time preview. It also includes a feature to export to HTML and PDF. Each of these editors has its own strengths and features, and the best one for you depends on your needs and working style. Some people prefer the simplicity and immediacy of Typora, while others appreciate the extensive customization and programming support in Atom or VS Code.--DIVIDER--# How to write in Markdown To get started with writing in Markdown, follow these steps: 1. Create a text file with a `.md` extension. For example, you can name it `README.md`.
While the `.md` extension is not mandatory, it is the conventional choice for Markdown files. 2. You can use any text editor to create and edit Markdown files. However, for the best experience, it is recommended to use a dedicated Markdown editor. Once you have your Markdown file ready, you can begin adding content to it. The beauty of Markdown lies in its simplicity. You can use special characters to format your text, such as denoting headings, bold text, links, lists, and more. In the following sections, we cover these special formatting rules or syntax in detail.--DIVIDER-- **Headers** In Markdown, you use the hash (#) symbol to create a heading. The number of hash symbols indicates the level of the heading. For example: ```markdown # Heading 1 ## Heading 2 ### Heading 3 ``` **Emphasis** You can make text bold or italicized by using asterisks (`*`) or underscores (`_`). Single `*` or `_` will italicize text, and double `**` or `__` will make it bold. For example: ``` *This text will be italic* _This will also be italic_ **This text will be bold** __This will also be bold__ ``` When rendered, the above text will look like this: _This text will be italic_ _This will also be italic_ **This text will be bold** **This will also be bold**--DIVIDER-- **Lists** To create an unordered list, you can use asterisks, pluses, or hyphens interchangeably. An ordered list can be created simply by numbering each line: ``` * Item 1 * Item 2 * Item 2a * Item 2b 1. Item 1 2. Item 2 3. Item 3 ``` The rendered markdown will look as follows: - Item 1 - Item 2 - Item 2a - Item 2b 1. Item 1 2. Item 2 3. Item 3 --DIVIDER--**Links and Images** You can create a hyperlink by wrapping the link text in brackets [ ], and then wrapping the link in parentheses ( ). ```markdown [Google](http://google.com) ``` In the example above, `Google` is the link text, and `http://google.com` is the link url. The link text is what will be displayed in the rendered Markdown, while the link url is the actual url that the link will point to. The above Markdown will render as: [Google](http://google.com) Note that you can directly type a url in the text without using the link syntax. For example, typing the url `http://google.com` as-is will be rendered as http://google.com (i.e. with a functional hyperlink). However, it is recommended to use the link syntax for readability in your raw Markdown file. To add an image, you follow a similar syntax but add an exclamation mark at the beginning: ```markdown ![Ready Tensor Logo](/images/logo.png) ``` In this example, `Ready Tensor Logo` is the alt text, and `/images/logo.png` is the image url denoted using a relative path. The alt text is what will be displayed in the rendered Markdown if the image fails to load.--DIVIDER-- **Code Blocks and Inline Code** One of the key advantages of Markdown is its ability to format code. This is crucial in the context of machine learning projects, where it's often necessary to present code snippets along with mathematical equations and technical explanations. For inline code, you can use single backticks. This is particularly useful when referring to a function or a variable in your text. For instance, \`model.fit()\` would render as `model.fit()`. If you have larger blocks of code, you can wrap your code in triple backticks (**```**) and optionally specify the programming language for syntax highlighting. 
Here's an example: ```` ```python import numpy as np import pandas as pd df = pd.read_csv('data.csv') print(df.describe()) ``` ```` This renders as: ```python import numpy as np import pandas as pd df = pd.read_csv('data.csv') print(df.describe()) ``` By specifying the language (like `python` in this example), you enable syntax highlighting, which makes the code more readable.--DIVIDER-- **Blockquotes** You can indicate blockquotes with the `>` character: ```markdown > This is a quote ``` --DIVIDER-- **Tables** Tables can be created in Markdown using a combination of hyphens and vertical bars. The hyphens are used to define the header row and separate it from the content rows, while the vertical bars are used to separate each cell within the table. To create a table, follow this syntax: ```markdown | Header 1 | Header 2 | Header 3 | | -------- | -------- | -------- | | Cell 1 | Cell 2 | Cell 3 | | Cell 4 | Cell 5 | Cell 6 | ``` In the example above, the first row represents the table header. The hyphens separate the header row from the content rows. Each cell is enclosed within vertical bars. The rendered table will look like this: | Header 1 | Header 2 | Header 3 | | -------- | -------- | -------- | | Cell 1 | Cell 2 | Cell 3 | | Cell 4 | Cell 5 | Cell 6 | Ensure that each column in the header row aligns with the respective columns in the content rows. The number of hyphens in the header row should match the number of columns. Adding a colon (`:`) to the hyphens in the header row can align the column content (e.g., `| :--- |` for left alignment, `| :---: |` for center alignment, `| ---: |` for right alignment). --DIVIDER-- **Horizontal Rules** You can create a horizontal rule by using three hyphens (`---`), asterisks(`***`), or underscores(`___`). For example, consider the following: ``` This is the first paragraph. --- This is the second paragraph. ``` This will render as: This is the first paragraph. --- This is the second paragraph.--DIVIDER-- **Line Breaks** In Markdown, you can create a line break using two trailing spaces at the end of a line or by using the HTML tag `<br/>`. Here's an example: ```markdown This is the first line. And this is the second line. This is another first line.<br/> And this is another second line. ``` Note that we have entered two spaces at the end of the first line, i.e. after the period in the text `This is the first line.`. This is to indicate a line break. We have also used the HTML tag `<br/>` to indicate a line break at the end of the sentence `This is another first line.` In both cases, the rendered Markdown will have a line break where specified. The rendered Markdown will look as follows: This is the first line. And this is the second line. This is another first line.<br/> And this is another second line. :::info{title="Info"} A single newline doesn't create a new paragraph or line break. This might be different from what you're used to in other text editors, but it's a feature of Markdown to allow easier line-wrapping in the source code. ::: --DIVIDER-- **Escape Characters** If you want to use any special characters which are used in the Markdown syntax, you can use a backslash: ```markdown \*This text will appear as it is, without any formatting\* ``` In this example, we have escaped the asterisks (`*`) by using a backslash (`\`). 
Without the backslash, the asterisks would have been interpreted as Markdown syntax and the text would have been rendered as italicized text.--DIVIDER-- **Comments** Even though Markdown does not support comments directly, you can use HTML syntax for comments, which will be ignored by the Markdown parser: ```markdown <!-- This is a single-line comment --> <!-- This is a multi-line comment. You can write as much as you want here. --> ``` These comments will not appear in the rendered Markdown. They're useful for leaving notes to yourself or to others who might be reading the raw Markdown. --DIVIDER-- # Using LaTeX Syntax for Equations in Markdown When it comes to writing mathematical equations in your documents, Markdown on its own can be a bit limiting. Fortunately, we can incorporate LaTeX, a powerful typesetting system widely used for technical and scientific documents, right within our Markdown documents. This is particularly useful for machine learning and data science projects where it's common to discuss mathematical concepts. Integrating LaTeX with Markdown allows us to render complex mathematical equations neatly. While Markdown takes care of the overall document structure and prose, LaTeX focuses on the mathematical components, ensuring they are clearly and accurately displayed. --DIVIDER--**Inline Equations** For inline equations, you can embed your LaTeX code within single dollar signs. For instance, the LaTeX code `$E=mc^2$` renders as $E=mc^2$. --DIVIDER--**Display Equations** For larger equations, or when you want the equation to be on a separate line, you use double dollar signs. For example, `$$y = mx + b$$` will render as: $$ y = mx + b $$--DIVIDER--**Basic LaTeX Syntax for Equations** LaTeX offers a vast array of symbols and structures for mathematical notation. Here are a few basics: - Superscripts and subscripts can be written using `^` and `_`, respectively. For example, `$x_i^2$` renders as $x_i^2$. Note that `$x^2_i$` also renders as $x^2_i$. - Fractions can be written using the `\frac` command. For example, `$\frac{a}{b}$` renders as $\frac{a}{b}$. - The square root can be written using the `\sqrt` command. For instance, `$\sqrt{a}$` renders as $\sqrt{a}$.--DIVIDER-- **Commonly Used LaTeX Commands in Machine Learning** In machine learning documentation, you often encounter Greek letters, summation symbols, and more. Here's how you can express these in LaTeX: - Lowercase Greek letters are written as `\alpha`, `\beta`, `\gamma`, etc. Uppercase Greek letters have their own commands only where they differ from Latin letters, such as `\Gamma`, `\Delta`, and `\Sigma` (there is no `\Alpha` or `\Beta`, since these are identical to the Latin A and B). For instance, `$\alpha$` renders as $\alpha$. - The summation symbol can be written using the `\sum` command. For example, `$\sum_{i=1}^{n} x_i$` renders as $\sum_{i=1}^{n} x_i$. - The integral symbol can be written using the `\int` command. For instance, `$\int_{a}^{b} f(x) \, dx$` renders as $\int_{a}^{b} f(x) \, dx$. - The product symbol can be written using the `\prod` command. For example, `$\prod_{i=1}^{n} x_i$` renders as $\prod_{i=1}^{n} x_i$. By combining these LaTeX syntax elements, you can construct complex mathematical formulas for your machine learning documentation.
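For instance, combining subscripts, `\frac`, and `\sum` with the `\hat` command (which places a hat over a symbol and is often used to denote predicted values; it is not covered in the list above), the mean squared error can be typeset as:

$$
\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2
$$

The corresponding LaTeX source is:

```latex
$$
\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2
$$
```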
Let's use an example from machine learning, the formula for the Gaussian distribution: $$ f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{ - \frac{(x-\mu)^2}{2\sigma^2} } $$ This formula contains several mathematical symbols and structures, and it can be written in LaTeX as: ```latex $$ f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{ - \frac{(x-\mu)^2}{2\sigma^2} } $$ ``` --DIVIDER-- # Using Markdown for Project README Files The README file is often the first point of interaction for anyone exploring your project. It's crucial that this document clearly communicates the purpose of the project, how to install and use it, any dependencies, and other pertinent information. Here, we discuss how to use Markdown effectively to create README files. 1. **Project Name**: Start with the name of your project at the top of the document. It's customary to use a H1 or H2 header for this. 2. **Project Description**: Provide a short description explaining what your project is about. This helps visitors quickly understand the purpose of your project. 3. **Installation Instructions**: Include a detailed step-by-step guide on how to install your project. Use code blocks to indicate commands that should be run in the terminal. 4. **Usage Guide**: Detail how to use your project. This could include examples of the project in action. If your project is a library, show examples of it being used in code. 5. **Contributing**: If your project is open-source and you're open to contributions, detail how others can contribute. This could include how to submit pull requests, create issues, and any code style requirements. 6. **License**: If your project has a license, state this in your README and include a copy of the license in your project. 7. **Contact Information**: Provide contact information so interested parties can reach out with questions, suggestions, or collaboration opportunities. 8. **Acknowledgments**: You may want to include a section to thank or acknowledge the work of others that contributed to your project. Here's a sample structure for a README file: ```markdown # Project Name ## Description A short description about the project. ## Installation Detailed installation instructions. ## Usage A guide on how to use the project, with examples. ## Contributing Guidelines on how to contribute to the project. ## License Information about the license. ## Contact Your contact information. ## Acknowledgments Acknowledgments for contributors or similar. ``` Remember, your README should be as simple or as detailed as necessary for others to understand and use your project.--DIVIDER-- # Converting Markdown to Other Formats Markdown documents are highly versatile and can be easily converted into various other formats for diverse uses such as presenting, sharing, or publishing. This is especially useful when you want to share your work with a larger audience or in a more formal setting. Below, we discuss some common conversion options and the tools that facilitate them. 1. **Markdown to HTML**: This is one of the most common conversions. Many Markdown editors provide this functionality, but you can also use command-line tools like [Pandoc](https://pandoc.org/) and Jupyter's `nbconvert`. For example, to convert a file named `example.md` to HTML using Pandoc, you would use the command: `pandoc example.md -s -o example.html`. With `nbconvert`, you can convert a Jupyter notebook to HTML using: `jupyter nbconvert --to html example.ipynb`. 2. 
**Markdown to PDF**: Converting Markdown files to PDF is particularly useful when you need a portable, easily shareable version of your document. Tools like [Typora](https://typora.io/) offer this functionality built-in. With Pandoc, you can convert a markdown file to PDF using a command like: `pandoc example.md -s -o example.pdf`. With `nbconvert`, you can convert a Jupyter notebook to PDF using: `jupyter nbconvert --to pdf example.ipynb`. 3. **Markdown to Word**: Sometimes, it might be useful to convert your Markdown file into a Word document, especially when collaborating with non-technical team members or clients who prefer using Word. This can also be achieved using Pandoc with the command: `pandoc example.md -s -o example.docx`. 4. **Markdown to Presentation Formats**: Markdown can even be converted into presentation formats like PowerPoint or reveal.js slides, which can be especially handy when you want to present your work to a wider audience. For example, to convert a Markdown file to PowerPoint with Pandoc, you would use the command: `pandoc example.md -t pptx -o example.pptx`. Remember that the `-s` option in the Pandoc commands mentioned above stands for `--standalone`, which means Pandoc will produce a standalone document with an appropriate header and footer (as opposed to a fragment of a document). Furthermore, Jupyter's `nbconvert` allows you to convert Jupyter notebooks, which support Markdown, into a variety of formats like HTML, LaTeX, PDF, and others. By converting your Markdown documents to different formats, you can ensure that your work is accessible and presentable to various audiences in different contexts.--DIVIDER-- # Best Practices for Markdown in ML projects When incorporating Markdown into your machine learning projects, the following best practices can be helpful: 1. **Maintain Consistency**: To enhance readability, decide on a style for various elements like headers, lists, emphasis and continue using it throughout the document. 2. **Use Headers Wisely**: Structure your document logically using headers. Headers guide the reader and provide a sense of what to expect from each section of the document. 3. **Be Concise**: Break down complex ideas into digestible chunks. Use bullet points and numbered lists to present information clearly and concisely. 4. **Include Relevant Code Blocks**: Code blocks offer context and practicality to your document. Use inline code for variables and short snippets, and fenced code blocks for larger ones. 5. **Utilize Links and Images**: Images and links can significantly improve the quality of your documentation. Use descriptive alt text for images for accessibility. 6. **Utilize LaTeX for Mathematical Expressions**: Machine learning projects often involve complex mathematical equations. LaTeX syntax in Markdown can make these equations more comprehensible. 7. **Keep README Comprehensive**: A README file gives an overview of the project. Ensure it is comprehensive and covers all aspects including installation, usage, contributions, etc. 8. **Regularly Update Documentation**: As your project evolves, so should your documentation. Regular updates ensure relevance and usefulness. Always remember that the purpose of using Markdown in your machine learning projects is to make your work more understandable and accessible. Consider your end reader when creating your documentation. --DIVIDER-- # Summary In this comprehensive guide, we've explored the role of Markdown in creating comprehensive documentation for machine learning projects. 
Topics covered include the basics of Markdown, its syntax, using LaTeX for equations, best practices, crafting README files, and converting Markdown to other formats. With its simple syntax and versatile use, Markdown can enhance documentation practices, making your work more accessible to both technical and non-technical audiences. Armed with your new knowledge of Markdown, you're now prepared to create clear and user-friendly documentation for your machine learning projects. -----DIVIDER-- # References 1. [Markdown Guide](https://www.markdownguide.org/) - A free and open-source reference guide that explains how to use Markdown, the simple and easy-to-use markup language you can use to format virtually any document. 2. [Mastering Markdown](https://guides.github.com/features/mastering-markdown/) - GitHub's guide to mastering Markdown, a comprehensive resource for learning Markdown syntax and use cases. 3. [Jupyter Notebook](https://jupyter.org/documentation) - Official documentation for Jupyter Notebook, an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. 4. [Visual Studio Code](https://code.visualstudio.com/docs/languages/markdown) - Visual Studio Code's guide to Markdown, offering insights on how to leverage the popular code editor for Markdown documents. 5. [Dillinger](https://dillinger.io/) - A powerful online Markdown editor and viewer. 6. [Typora](https://typora.io/) - The official website for Typora, a minimal Markdown editor. 7. [LaTeX Wikibook](https://en.wikibooks.org/wiki/LaTeX) - A Wikibook offering a detailed guide on LaTeX for high-quality typesetting. 8. [Pandoc](https://pandoc.org/) - The official website for Pandoc, a universal document converter. --DIVIDER--
of4NZ9yVgKja
ready-tensor
cc-by-sa
Foundational LLMs in Timeseries Forecasting: A Benchmarking Study
![wordcloud.png](wordcloud.png)--DIVIDER--# Foundational Models in Timeseries Forecasting: A Benchmarking Study ## Abstract This ongoing project at Ready Tensor features a comprehensive benchmarking analysis of foundational time series models, starting with the **Chronos** family from **Amazon Science** and **MOIRAI** by **Salesforce**. Our study offers a detailed comparison of foundational forecasting models against 23 other leading models from our extensive database, covering neural networks, machine learning, statistical methods, and naive approaches. The evaluation criteria include performance measured by RMSSE across 24 datasets, execution durations, memory usage, hyperparameter sensitivity, and the comparative sizes of Docker images for deployment. By integrating these foundational models, our project aims to uncover their unique advantages in zero-shot learning, generalization across diverse dataset frequencies, and operational efficiencies against the backdrop of traditional and contemporary forecasting techniques. ## Introduction The recent emergence of foundational models like Chronos, MOIRAI, Moment, and TimesFM introduces a new paradigm in forecasting, promising improved accuracy and generalization capabilities. Ready Tensor's project evaluates these models against traditional forecasting methods to understand their effectiveness and operational efficiency. This project incorporates performance comparisons using the RMSSE metric, execution time, and memory usage. We also perform sensitivity analysis on model hyperparameters. As an additional criterion for deployment consideration, we report the sizes of dockerized images for each model in this project. This approach helps identify the most efficient and accurate models for practical deployment in diverse settings. By focusing on foundational models, we aim to provide insights into their role in simplifying forecasting pipelines and enhancing predictive accuracy. ## Architectural Overview and Model Sizes For detailed architectural insights and methodologies behind the Chronos and Moirai models, readers are directed to their respective publications: - **Chronos** paper: [Chronos: Learning the Language of Time Series](https://arxiv.org/pdf/2403.07815) - **Moirai** paper: [Unified Training of Universal Time Series Forecasting Transformers](https://arxiv.org/pdf/2402.02592) The model sizes, quantified in terms of the number of trainable parameters, vary significantly across different configurations, impacting both their performance and computational demands: - **Chronos Sizes**: - `chronos-t5-tiny`: 8 million parameters - `chronos-t5-mini`: 20 million parameters - `chronos-t5-small`: 46 million parameters - `chronos-t5-base`: 200 million parameters - `chronos-t5-large`: 710 million parameters - **Moirai Sizes**: - `moirai-R-small`: 14 million parameters - `moirai-R-base`: 91 million parameters - `moirai-R-large`: 311 million parameters Understanding the scale of these models is crucial for users to anticipate the resource needs and potential deployment scenarios, balancing the trade-offs between computational efficiency and forecasting accuracy. ## Forecast Accuracy Our analysis, represented through a heatmap chart, compares the RMSSE performance of 5 Chronos models and 3 Moirai models against 23 other leading forecasting models across 24 datasets.
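For reference, RMSSE (Root Mean Squared Scaled Error) is commonly defined, following the formulation popularized by the M5 competition, as:

$$
\mathrm{RMSSE} = \sqrt{\frac{\frac{1}{h}\sum_{t=n+1}^{n+h}\left(y_t - \hat{y}_t\right)^2}{\frac{1}{n-1}\sum_{t=2}^{n}\left(y_t - y_{t-1}\right)^2}}
$$

where $n$ is the number of in-sample observations, $h$ is the forecast horizon, $y_t$ are the observed values, and $\hat{y}_t$ are the forecasts. The denominator is the mean squared error of a one-step naive forecast over the training period, so scores below 1 indicate a model that beats the naive baseline. The exact evaluation setup used in this benchmark is described on the project page referenced below.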
Refer to the page for project [Ready Tensor Forecasting Benchmark](https://app.readytensor.ai/projects/EeNv3K1byb1V20OLZbBOd) for a description of the datasets, evaluation method, and metrics used in this analysis. Models can be compared on a number of metrics, including RMSE, RMSSE, MAE, MASE, sMAPE, WAPE, and R-squared. For this analysis, we focus on RMSSE, a scaled version of RMSE that compares a model's performance to a naive baseline. Note that lower RMSSE scores indicate better forecasting performance. The following heatmap displays the average RMSSE scores for each model, grouped by dataset frequency. The results are filtered to 31 models for brevity, including the five Chronos models and 3 Moirai models which are at the bottom of the chart. ![Forecasting Models RMSSE Heatmap](https://github.com/readytensor/rt_forecasting_foundational_models/blob/main/outputs/moirai/moirai_forecasting_models_heatmap.png?raw=true) **Key Findings:** - The analysis reveals consistently strong performance for the Chronos models across different time frequencies. This demonstrates the Chronos models' adeptness at handling diverse forecasting scenarios without prior training on those specific datasets. - The Chronos-T5-Large model emerges as a standout, demonstrating exceptional forecasting accuracy. This places it on par with the best of traditional machine-learning and neural network models. - The Moirai models have been integrated into this benchmark. The Moirai base and large models each achieved an average RMSSE score of 0.80, indicating solid performance though not surpassing the top models like Chronos-T5-Large. The Moirai small model scored a 0.89, reflecting challenges in robustness or generalization compared to its larger counterparts. - A performance gradient is observed among the foundational models, with larger models generally performing better, a trend also seen within the Chronos and Moirai families. These results underscore the promising potential of foundational models in forecasting, aligning with trends seen in other domains like natural language processing. The insights from the Chronos and Moirai models affirm the evolving landscape where large, pretrained models increasingly define the frontiers of accuracy and applicability in forecasting tasks. :::alert{type=info} **Note on Forecast Length**: The Chronos documentation indicates that the current Chronos checkpoints (03/24/2024 at the time of this writing) work best with prediction_length <= 64. In our benchmark, 5 out of 24 datasets exceed the threshold. ::: ### Potential Train-Test Leakage When evaluating foundational models such as Chronos and Moirai, it's essential to consider the possibility of train-test leakage. This issue arises if benchmarking datasets, like samples from the M4 competition, have also been used during the development of these models. Such overlap could result in models being indirectly 'trained' on data that is later used for their evaluation. Addressing this challenge is complex. As we continue to incorporate more foundational models into our analysis, finding a large enough benchmark untouched by any model becomes increasingly challenging. However, the results of our evaluations are still valuable for understanding the relative performance of these models and offer insights into their effectiveness across diverse datasets. Users should remain mindful of this potential bias when interpreting results and making decisions based on these evaluations. 
## Execution Durations and Memory Usage Our analysis extends to the execution durations (training and prediction) and memory usage (CPU and GPU) of the Chronos models compared with other models. We particularly focused on the **Air Quality 2018** dataset, the largest among the 24 datasets in our benchmark, to underscore the differences in execution time and resource utilization more distinctly. ### Benchmarking Infrastructure The study utilized two types of machines from the AWS platform based on the requirement for GPU acceleration: - For models not requiring GPU acceleration, `c6a.4xlarge` instances were used, featuring 16 vCPUs, 32.0 GiB memory, with AMD EPYC 7R13 processors. - Models requiring GPU acceleration were run on `g5.2xlarge` instances, equipped with 8 vCPUs, 32.0 GiB memory, 24.0 GiB video memory, powered by AMD EPYC 7R32 processors, and utilizing NVIDIA A10G GPUs. ### Tracking Methodology Ideally, the training and prediction tasks would be executed multiple times (3 or 5 times) to report the minimum observed values for durations and memory usage. However, to manage compute costs, metrics from a single run are reported in this study. The reported CPU memory usage primarily tracks Python memory through `tracemalloc`, which may not fully represent the total memory footprint. Specifically, this tracking does not include memory consumed by underlying processes, such as those executed in C/C++ by imported modules. Consequently, the actual CPU memory utilization could be higher than what is reported. It should be noted upfront that observed differences in execution times and memory requirements may not solely reflect the computational demands of the forecasting models themselves but also the preprocessing overhead introduced by their respective libraries. Models leveraging libraries like MLForecast, NeuralForecast, Skforecast, GluonTS and Darts incorporate distinct preprocessing steps, which could significantly impact overall execution times and memory usage. ### Execution Time and Memory Usage Comparison See the following chart for a comparison of the prediction times and memory usage of the Chronos and Moirai models against other models: ![Execution Times and Memory Usage by Forecasting Models](https://github.com/readytensor/rt_forecasting_foundational_models/blob/main/outputs/moirai/moirai_model_exec_durations_and_memory.png?raw=true) Key observations from the data include: - **Training Times:** Both the Chronos and Moirai models, as zero-shot learners, do not require training, leading to zero training time and memory usage. - **Prediction Times:** The Chronos-T5-Large model's inference time is significantly longer than all other models, including Moirai, with a prediction time of 51.7 seconds for the Air Quality dataset. In contrast, the Moirai Large model takes 27.8 seconds, performing faster than Chronos but slower compared to traditional methods. - **CPU Memory Usage:** The Base and Large Moirai models exhibit higher CPU memory requirements than the Chronos models during prediction. - **GPU Memory Usage:** There is a notable difference in GPU memory consumption between the models. The Chronos-T5-Large model consumes a substantial 13.8 GB, while the Moirai Large model uses significantly less, with only 2.4 GB required. This reflects Moirai's more efficient use of GPU resources during prediction tasks. 
These metrics highlight the operational demands and efficiencies of foundational models, with Moirai models presenting a lower resource footprint at inference compared to Chronos. This section underscores the need to balance the advanced predictive capabilities of such models against their computational resource requirements, especially in GPU-intensive environments. ## Hyperparameter Impact Analysis ### Chronos-T5-Large Model We conducted an analysis of how changes in hyperparameters affect the forecasting accuracy of the Chronos-T5-Large model. This analysis focused on four key hyperparameters: `num_samples`, `top_p`, `top_k`, and `temperature`. The default values for these hyperparameters for the Chronos models are `num_samples=20`, `top_p=1.0`, `top_k=50`, and `temperature=1.0`. To isolate the impact of each hyperparameter, we varied them individually while keeping the others at their default values. The analysis was performed across all 24 datasets to observe the changes in RMSSE values. See the following charts for the impact of each hyperparameter on the Chronos-T5-Large model's performance: ![Hyperparameter Impact Analysis](https://github.com/readytensor/rt_forecasting_foundational_models/blob/main/outputs/chronos/chronos_hyperparameter_impacts.png?raw=true) **Key findings:** - **`num_samples`:** The results observed in the chart above suggest that increasing `num_samples` enhances the model's accuracy. We observe the best RMSSE value at `num_samples = 30`. However, this improvement comes at the cost of increased GPU memory consumption (not displayed in the chart). The physical limitation of GPU memory capped our experimentation at 30 samples (which consumed ~20GB VRAM). - **`top_k`:** The performance improvement with an increase in `top_k` values is evident, indicating a positive correlation between `top_k` and forecasting accuracy. However, a plateau effect is observed beyond `top_k` = 100, suggesting diminishing returns with further increases. - **`top_p` and `temperature`:** These two hyperparameters showed mixed impacts on RMSSE values, without a clear directional trend. This suggests that the influence of `top_p` and `temperature` on model performance might be more complex, requiring further exploration to fully understand their effects. **Note on `temperature`:** During our investigation, we experimented with values of `temperature` higher than 1.0. However, these adjustments led to significantly worse performance outcomes, signaling that the model becomes more volatile with increased `temperature` values. This finding emphasizes the delicate balance required in tuning `temperature` to enhance model stability and accuracy. ### Moirai-Large Model We conducted an analysis to examine how changes in hyperparameters during inference affect the forecasting accuracy of the Moirai-Large model. This analysis focused on two key hyperparameters: `num_samples` and `context_length`. The default settings for these hyperparameters in Moirai models are `num_samples=100` and `context_length=1000`. To isolate the impact of each hyperparameter, we varied them individually while keeping the others at their default values. The analysis spanned all 24 datasets to observe variations in RMSSE values.
See the following charts for the impact of each hyperparameter on the Moirai-Large model's performance: ![Hyperparameter Impact Analysis for Moirai-Large](https://github.com/readytensor/rt_forecasting_foundational_models/blob/main/outputs/moirai/moirai_hyperparameter_impacts.png?raw=true) **Key findings:** - **`num_samples`:** The Moirai-Large model's sensitivity to `num_samples` is evident at lower counts, with performance improving from an RMSSE of 0.88 at 10 samples to 0.83 at 20, stabilizing at 0.80 from 50 samples onward. Higher sample counts are advisable for optimal accuracy. - **`context_length`:** Moirai-Large's performance is also sensitive to `context_length`. RMSSE improves from 0.92 at 50 to 0.82 at 100, stabilizing at 0.80 for lengths of 500 and beyond. While the authors recommend a minimum of 1000 for most scenarios, dataset frequency should guide the optimal setting: shorter lengths may suffice for low frequency data, while high frequency data typically benefits from longer lengths. :::alert{type=important} **Note on `patch_size`:** `patch_size` is a critical third hyperparameter for the Moirai-Large model. We used the default 'auto' setting for our analysis. While attempting to explore different `patch_size` settings, our internal tests showed high variance and did not reveal clear trends, making the results inconclusive. The model authors suggest adjusting `patch_size` according to data frequency—opting for shorter patch sizes for lower frequency data and larger ones for higher frequency data. Given the sensitivity and complexity associated with this hyperparameter, we strongly recommend that users carefully calibrate `patch_size` based on the specific characteristics and frequency of their datasets to optimize forecasting accuracy. ::: ## Docker Image Sizes Across Forecasting Models In our study, all models are containerized using Docker to facilitate cross-platform deployment and reproducibility. Each Docker image represents the model, its dependencies, and the necessary libraries for deployment. In this section, we review the image sizes across 31 models as a final consideration in our model comparisons. The image sizes provide insights into the resource footprint of each model, which is crucial for deployment in resource-constrained environments. See the following chart for a comparison of Docker image sizes across the 31 forecasting models: ![Model Image Sizes](https://github.com/readytensor/rt_forecasting_foundational_models/blob/main/outputs/moirai/moirai_docker_image_sizes.png?raw=true) The following are the key takeaways from the Docker image size review: - **Chronos models' image sizes:** The Chronos models are notably the largest in image size, with sizes ranging from 15.96 GB for the Chronos-T5-Tiny to 18.76 GB for the Chronos-T5-Large model. This indicates a significant resource footprint, attributed to the comprehensive libraries and dependencies these advanced models necessitate. Note that these images include the pretrained model weights, ensuring everything required for making predictions is self-contained within the image. - **Moirai models' image sizes:** The Moirai models have smaller Docker images than the Chronos models, with sizes ranging from 11.42 GB for the Moirai Small, 12.11 GB for the Moirai Base, to 13.87 GB for the Moirai Large model. These sizes are smaller than those of the Chronos models but still larger than most other models, reflecting a moderate to high resource footprint.
- **Comparison with other models:** The smallest image sizes belong to models like the Extra Trees and Random Forest Forecasting Models in Scikit-Learn, which are among the top performers in forecasting accuracy. These models demonstrate an efficient use of resources without compromising on performance. - **Naive models' image sizes:** Despite their simplicity, naive models using the Darts library do not yield the smallest Docker images. The choice of Darts, which supports a wide array of forecasting algorithms, introduces a considerable set of dependencies. This decision, prioritizing convenience and functionality, impacts the overall image size. The analysis of Docker image sizes underscores the operational considerations that accompany the adoption of advanced models like Chronos and Moirai. While these models offer superior forecasting accuracy even on unseen datasets, they demand substantial computational resources for deployment. ## Project Summary This comprehensive benchmarking project by Ready Tensor delivers critical insights into the performance, operational efficiency, and deployment considerations of 31 forecasting models, including the foundational Chronos and Moirai families. Our findings highlight the superior accuracy of Chronos models and the commendable performance of Moirai models, although both come with considerable resource demands, particularly in terms of GPU memory and Docker image sizes. Our sensitivity analysis underscores the necessity of carefully tuning inference hyperparameters for both Chronos and Moirai models to optimize their forecasting accuracy. Each model family responds differently to hyperparameter settings, emphasizing the importance of a tailored approach to maximize performance and efficiency. As we continue to explore the frontier of forecasting with foundational models, this project remains a work-in-progress. We are consistently incorporating new foundational models and updating our analyses to extend our understanding of these advanced tools in forecasting. This ongoing effort reflects Ready Tensor’s commitment to advancing the state of the art in AI forecasting, aiming to balance cutting-edge accuracy with practical deployment considerations.
pCgumBWFPD90
ready-tensor
cc-by
PEP8 Style Guide for Data Scientists and AI/ML Engineers
![pep8.svg](pep8.svg)--DIVIDER--tl;dr This tutorial will help you gain a solid understanding of the PEP8 style guide for writing clean, professional Python code.--DIVIDER--# Overview Welcome to the tutorial on writing PEP8 compliant Python code. PEP8 is the official style guide for Python, outlining best practices and conventions for formatting your code. Adhering to PEP8 recommendations can make your code more readable, maintainable, and consistent, fostering collaboration and easier code reviews. In this tutorial, we will cover the following topics: - **Introduction to PEP8**: Understand the significance of the PEP8 style guide and why it's considered the gold standard in Python code formatting. - **Key PEP8 Recommendations**: Dive into the main tenets of PEP8 such as naming conventions, indentation, line length, whitespace, and more. This section will provide insights into the conventions and their importance in writing clean code. - **Using Linters and Formatters to Enforce PEP8**: Learn about tools like flake8, pylint, and black that help in checking and ensuring that your code is PEP8 compliant. This section will be filled with hands-on examples, showcasing how these tools can be integrated into your coding workflow. - **Integrating PEP8 Checks into Development Workflow**: Discover the benefits of having automated PEP8 checks as part of your continuous integration process, ensuring code quality from the onset. - **Customizing PEP8 to Fit Team and Project Needs**: Understand how you can customize PEP8 rules to better align with your team's coding style and project requirements. - **Balancing PEP8 Recommendations with Practicality**: While PEP8 is a great guide, there are times when deviations are necessary. In this section, we'll discuss how to strike a balance between sticking to the style guide and ensuring code readability and efficiency in real-world scenarios. By the end of this tutorial, you will have a solid understanding of the PEP8 style guide and its recommendations for writing clean, professional Python code. You will also gain practical experience using linters and formatters to enforce PEP8 compliance in your projects, ensuring your code is consistently well-organized and easy to read. Let's get started! --DIVIDER--# Introduction to PEP8 Python, as a programming language, has gained widespread popularity among data scientists and machine learning engineers due to its simplicity and readability. PEP8, the official style guide for Python code, plays a significant role in maintaining this readability by providing a set of conventions that developers can follow when writing Python code. Adhering to these conventions ensures that the code is consistent, clean, and easy to understand, making it more maintainable and accessible for collaboration. PEP8 is particularly important for data scientists and ML engineers working in teams, as it helps create a standardized codebase that is easier for all team members to read and understand. A consistent coding style enables efficient collaboration, smooth communication, and reduces the likelihood of misunderstandings and errors, which are essential factors in delivering high-quality projects. PEP8 also helps developers avoid common pitfalls and mistakes, such as using ambiguous variable names or inconsistent indentation, which can lead to bugs and make code difficult to maintain. Let us now dive into the PEP8 style guide and explore its key recommendations for writing clean, professional Python code. 
--DIVIDER--# Key PEP8 recommendations In this section, we will explore the key recommendations of the PEP8 style guide, which covers various aspects of Python code, including naming conventions, indentation, line length, whitespace, and more.--DIVIDER--## Naming conventions The following are the key recommendations for naming conventions in Python code: - **Variable names**: Use lowercase letters and underscores to separate words in variable names. For example, `num_samples`, `learning_rate`, `model_name`, etc. - **Function names**: Use lowercase letters and underscores to separate words in function names. For example, `train_model`, `evaluate_model`, `get_data`, etc. - **Class names**: Use CamelCase to separate words in class names. For example, `DataLoader`, `Model`, `Trainer`, etc. - **Constants**: Use all uppercase letters and underscores to separate words in constant names. For example, `NUM_SAMPLES`, `LEARNING_RATE`, `MODEL_NAME`, etc. - **Private variables**: Use a single underscore prefix to indicate private variables. For example, `_num_samples`, `_learning_rate`, `_model_name`, etc. - **Private functions**: Use a single underscore prefix to indicate private functions. For example, `_train_model`, `_evaluate_model`, `_get_data`, etc. - **modules**: Use short, all-lowercase names for modules. Underscores can be used in the module name if it improves readability. `data_loader.py`, `model.py`, `trainer.py`, etc. - **packages**: Use short, all-lowercase names, although the use of underscores is discouraged. For example, `dataloader`, `model`, `trainer`, etc. - **exceptions**: Use CamelCase for exception names. For example, `ValueError`, `TypeError`, `ZeroDivisionError`, etc. - **arguments**: Use lowercase letters and underscores to separate words in argument names. For example, `num_samples`, `learning_rate`, `model_name`, etc. - **keyword arguments**: Use lowercase letters and underscores to separate words in keyword argument names. For example, `num_samples`, `learning_rate`, `model_name`, etc.--DIVIDER--## Indentation PEP8 style guide recommends the following for indentation in Python code: - Use 4 spaces per indentation level, not tabs - Align continuation lines with the opening delimiter, or use a hanging indent with 4-space indentation The following is an example of correct indentation: ```python # Correct: # Aligned with opening delimiter. foo = long_function_name(var_one, var_two, var_three, var_four) # Add 4 spaces (an extra level of indentation) to distinguish arguments from the rest. def long_function_name( var_one, var_two, var_three, var_four): print(var_one) # Hanging indents should add a level. foo = long_function_name( var_one, var_two, var_three, var_four) ``` The following is an example of wrong indentation: ```python # Wrong: # Arguments on first line forbidden when not using vertical alignment. foo = long_function_name(var_one, var_two, var_three, var_four) # Further indentation required as indentation is not distinguishable. def long_function_name( var_one, var_two, var_three, var_four): print(var_one) ```--DIVIDER--## Line length According to PEP8, the recommended maximum line length for Python code is 79 characters, including whitespace. This limit is designed to improve code readability by preventing lines from becoming excessively long and difficult to follow. Additionally, it ensures that the code can be easily viewed on various devices and screens without horizontal scrolling. 
When a statement is too long to fit within the 79-character limit, you can break it into multiple lines using parentheses, brackets, or braces, or by using the line continuation character (`\`). Make sure to follow the indentation guidelines discussed earlier for continuation lines. For comments and docstrings, PEP8 recommends a slightly shorter maximum line length of 72 characters. This allows for proper formatting when generating documentation or displaying the comments and docstrings in various contexts. --DIVIDER--## Whitespace Appropriate use of whitespace is vital for code readability, as it visually separates different elements and helps to convey the structure of the code. PEP8 provides several recommendations for using whitespace in Python code. Let us explore them in detail: #### Blank lines Blank lines play an essential role in visually separating different sections of code, making it easier to understand the code's structure and organization. - Top-level functions and class definitions: Use two blank lines to separate top-level functions and class definitions. This practice helps to distinguish between different sections of your code and improves overall readability. ```python class MyClass: # Class implementation def my_function(): # Function implementation class AnotherClass: # Class implementation ``` - Method definitions inside a class: Use one blank line to separate method definitions inside a class. This spacing helps to delineate the individual methods and their boundaries within the class. ```python class MyClass: def method_one(self): # Method implementation def method_two(self): # Method implementation def method_three(self): # Method implementation ``` - Grouping related sections of code: You can use blank lines to group related sections of code within a function or method. However, it is essential not to overuse blank lines, as too many can make your code appear disjointed and less coherent. ```python def my_function(): # Section 1: Data preprocessing # ... # Section 2: Model training # ... # Section 3: Model evaluation # ... ``` #### Whitespace in expressions and statements - Use spaces around operators and after commas to improve readability.
For example: ```python result = a + b * (c - d) my_list = [1, 2, 3, 4, 5] ``` - Do not use spaces around the "=" sign when used for keyword arguments or default parameter values: ```python def my_function(a, b, c=None, d=0): pass ``` - Place a single space before and after assignment operators, comparison operators, and boolean operators: ```python x = 5 y = x * 2 if x > 0 and y < 10: print("Within range") ``` - Avoid extraneous whitespace in the following situations: - Immediately inside parentheses, brackets, or braces: ```python # Correct my_list = [1, 2, 3] # Incorrect my_list = [ 1, 2, 3 ] ``` - Immediately before a comma, semicolon, or colon: ```python # Correct my_dict = {"key": "value", "another_key": "another_value"} # Incorrect my_dict = {"key" : "value" , "another_key" : "another_value"} ``` - Immediately before the open parenthesis that starts the argument list of a function call: ```python # Correct result = my_function(arg1, arg2) # Incorrect result = my_function (arg1, arg2) ``` - Immediately before the open bracket that starts an indexing or slicing operation: ```python # Correct my_value = my_list[3] # Incorrect my_value = my_list [3] ```--DIVIDER--## Imports In this section, we will discuss the PEP8 recommendations regarding the organization and style of import statements in Python code. Properly organizing imports improves the code's readability and makes it easier to identify dependencies. #### Order of imports PEP8 recommends organizing imports into three distinct groups, separated by a blank line. The groups are as follows: 1. Standard library imports 2. Third-party library imports 3. Local application or library imports This organization helps to visually separate different types of imports and makes it clear where each imported module or package originates. ```python # Standard library imports import os import sys # Third-party library imports import numpy as np import pandas as pd # Local application/library imports import my_module import another_module ``` #### Import style PEP8 recommends using absolute imports rather than relative imports, as they are usually more readable and less prone to errors. Additionally, it is recommended to use the "import" statement to import an entire module or specific objects from a module, instead of using "from ... import \*", which can lead to unclear or conflicting names in the namespace. ```python # Recommended import my_module from my_module import my_function # Not recommended from my_module import * ``` #### Line length and multiple imports When importing multiple objects from a single module, and the line length exceeds the recommended 79 characters, you can break the imports into multiple lines using parentheses and place one import per line. ```python from my_module import ( first_function, second_function, third_function, ) ``` #### Alphabetical order To further improve the readability of your import statements, you can order them alphabetically within each import group. This practice makes it easier to locate specific imports when scanning the code. ```python # Standard library imports import os import sys # Third-party library imports import matplotlib.pyplot as plt import numpy as np import pandas as pd # Local application/library imports import my_module import another_module ```--DIVIDER--## Docstrings and Comments Let's review the PEP8 recommendations for docstrings and comments in Python code.
#### Docstrings Docstrings are multi-line strings used to provide documentation for modules, classes, functions, and methods. They are enclosed in triple quotes (either single or double) and should be placed immediately after the definition of the entity they document. PEP8 recommends following the "docstring conventions" laid out in PEP 257. Some key points from PEP 257 include: - For a one-line docstring, keep the summary concise and on the same line as the opening triple quotes, followed by the closing triple quotes. ```python def my_function(): """This is a concise one-line docstring.""" # Function implementation ``` - For a multi-line docstring, start with a one-line summary, followed by a blank line, and then a more detailed description. The closing triple quotes should be placed on a new line. ```python def my_function(): """ This is a summary of the function's purpose. This section provides a more detailed description of the function, its arguments, return values, and any exceptions it may raise. The description can span multiple lines, adhering to the recommended 72-character limit for docstrings. """ # Function implementation ``` #### Comments Comments are an essential tool for explaining the purpose, logic, or implementation details of your code. PEP8 provides several recommendations for writing and formatting comments to maximize their usefulness and readability: - Use inline comments sparingly and ensure they are separated by at least two spaces from the code statement. Start the comment with a '#' followed by a single space. ```python x = x + 1 # Increment the value of x ``` - Keep comments up-to-date, as outdated comments can be more confusing than helpful. - Use complete sentences when writing comments, and ensure they are clear, concise, and relevant to the code they describe. - For block comments, which describe a section of code, place them before the code they describe and align them with the code. Start each line with a '#' followed by a single space. ```python # The following section of code calculates the sum # of all elements in the list and stores the result # in the variable 'total_sum' total_sum = 0 for element in my_list: total_sum += element ``` --DIVIDER--# Linters and Formatters to Enforce PEP8 Linters and formatters are useful to check and enforce PEP8 compliance in your Python code. Linters analyze your code for potential errors, bugs, and non-compliant coding practices, while formatters automatically adjust your code's formatting to adhere to PEP8 guidelines.--DIVIDER--### Linters There are several popular linters available for checking PEP8 compliance in Python code. Two widely-used linters are: - Flake8: Flake8 is a popular linter that combines the functionality of PyFlakes, pycodestyle, and McCabe complexity checking. It is easy to configure and can be integrated with various text editors and IDEs. To install and use Flake8, run the following commands: ``` pip install flake8 flake8 your_script.py ``` - Pylint: Pylint is another powerful linter that goes beyond PEP8 compliance checks and provides additional insights into code quality, potential bugs, and refactoring opportunities. To install and use Pylint, run the following commands: ``` pip install pylint pylint your_script.py ``` Both linters can be customized to fit your team's preferences and project requirements by modifying their configuration files.--DIVIDER--### Formatters Formatters are tools that automatically adjust your code's formatting to adhere to PEP8 guidelines. 
Two popular formatters are: - Black: Black is an opinionated code formatter that prioritizes consistency and readability. With minimal configuration options, Black enforces a uniform coding style across your project. To install and use Black, run the following commands: ``` pip install black black your_script.py ``` - Autopep8: Autopep8 is a formatter that focuses specifically on PEP8 compliance. It provides more configuration options than Black, allowing for greater customization. To install and use Autopep8, run the following commands: ``` pip install autopep8 autopep8 --in-place --aggressive --aggressive your_script.py ``` By using linters and formatters, you can ensure that your Python code adheres to PEP8 guidelines, improving its readability and maintainability. In the upcoming sections, we will discuss integrating PEP8 checks into your development workflow and continuous integration (CI) pipeline, which will help you maintain a consistent coding style throughout your project. --DIVIDER--# Integrating PEP8 Checks into Development Workflow In this section, we will discuss how to integrate PEP8 checks into your development workflow to maintain a consistent coding style and catch issues early in the development process. Integrating PEP8 checks into your workflow will help you and your team ensure that your Python code remains readable and maintainable. --DIVIDER--### Text editor and IDE integrations Many text editors and IDEs support PEP8 compliance checking, either natively or through plugins. Integrating PEP8 checks into your preferred text editor or IDE allows you to see and fix issues as you write code. Some popular text editors and IDEs with PEP8 support include: - Visual Studio Code: You can use extensions like "Python" by Microsoft or "Pylance" to enable PEP8 checking and formatting. - PyCharm: PyCharm has built-in PEP8 compliance checking and automatic formatting support. - Sublime Text: Install the "SublimeLinter" and "SublimeLinter-flake8" packages to enable PEP8 checking. #### Pre-commit hooks Pre-commit hooks are scripts that run automatically before each commit, allowing you to check for PEP8 compliance and other issues before your changes are committed to the repository. You can use the "pre-commit" framework to manage pre-commit hooks for PEP8 compliance checking and automatic formatting. To set up pre-commit hooks, follow these steps: - Install the pre-commit package: ``` pip install pre-commit ``` - Create a `.pre-commit-config.yaml` file in your project's root directory with the following content: ```yaml repos: - repo: https://github.com/ambv/black rev: stable hooks: - id: black language_version: python3.7 - repo: https://gitlab.com/pycqa/flake8 rev: 3.9.2 hooks: - id: flake8 ``` - Run `pre-commit install` to set up the pre-commit hooks. Now, every time you commit changes to your repository, the pre-commit hooks will check for PEP8 compliance and format your code automatically. #### Continuous Integration (CI) Pipeline Integrating PEP8 checks into your CI pipeline ensures that any code changes submitted by you or your team members meet the required coding standards before they are merged into the main branch. Popular CI services like GitHub Actions, GitLab CI/CD, and Jenkins can be configured to run PEP8 checks on each pull request or merge request. This setup will help you maintain consistent code quality across your project. 
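As an illustration, here is a minimal sketch of a GitHub Actions workflow that runs Flake8 on every pull request. The file name (`.github/workflows/pep8.yml`), job name, and action versions are illustrative choices rather than fixed requirements, so adapt them to your repository:

```yaml
# .github/workflows/pep8.yml (illustrative file name)
name: PEP8 checks

on:
  pull_request:

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      # Check out the repository code
      - uses: actions/checkout@v4
      # Set up a Python interpreter for the linting job
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      # Install and run flake8 against the whole repository
      - run: pip install flake8
      - run: flake8 .
```

A similar job can be defined in GitLab CI/CD or Jenkins; the key point is that the linter runs automatically on every proposed change before it is merged.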
By integrating PEP8 checks into your development workflow, you can ensure that your Python code remains readable, maintainable, and adheres to a consistent coding style. This practice will help you and your team catch issues early, streamline collaboration, and improve the overall quality of your project.--DIVIDER--# Customizing PEP8 to Fit Team and Project Needs In real-world projects, it's often necessary to adapt PEP8 rules to meet the specific needs of your team and project. By customizing the configuration of linters and formatters, you can enforce a coding style that aligns with your team's preferences and project requirements.--DIVIDER--#### Customizing linter configuration Both Flake8 and Pylint allow you to customize their configurations to enforce your preferred coding style. To do this, you can create a configuration file in your project's root directory. - For Flake8, create a `.flake8` file with the following example content: ``` [flake8] max-line-length = 100 ignore = E203, W503 ``` In this example, we've set the maximum line length to 100 characters and have chosen to ignore specific PEP8 rules (E203 and W503). - For Pylint, create a `pylintrc` file with the following example content: ``` [MASTER] max-line-length = 100 [MESSAGES CONTROL] disable = C0301 ``` Similar to the Flake8 configuration, we've set the maximum line length to 100 characters and disabled rule C0301, which corresponds to the line length rule. #### Customizing formatter configuration Both Black and Autopep8 allow you to customize their configurations to format your code according to your preferred style. - For Black, you can create a `pyproject.toml` file in your project's root directory with the following example content: ``` [tool.black] line-length = 100 ``` In this example, we've set the maximum line length to 100 characters. - For Autopep8, you can pass command-line arguments to customize its behavior, as shown in this example: ``` autopep8 --in-place --aggressive --aggressive --max-line-length 100 your_script.py ``` Here, we've set the maximum line length to 100 characters. --DIVIDER--# Balancing PEP8 with Practicality While adhering to the PEP8 style guide is important for maintaining consistent, readable, and maintainable Python code, it's also crucial to balance the strict application of PEP8 rules with practicality and readability in real-world projects. In this section, we will discuss some guidelines for striking this balance. 1. Prioritize readability over strict adherence: Although PEP8 provides a great set of guidelines for writing readable code, sometimes strict adherence to these rules can actually make the code less readable. In such cases, it's important to prioritize readability over strict PEP8 compliance. For example, you might break the line length limit if it improves readability or if breaking the line would make the code more difficult to understand. 2. Adapt PEP8 rules to your team's preferences and project requirements: Different teams and projects may have unique requirements and preferences when it comes to coding style. Instead of blindly following PEP8 rules, it's essential to adapt them to fit your team's needs. You can customize the configuration of linters and formatters to enforce a coding style that aligns with your team's preferences and project requirements. For example, you might choose a different maximum line length or modify the rules for naming conventions. 3. 
Use comments and docstrings effectively: While PEP8 provides guidelines for the formatting of comments and docstrings, it's also important to focus on their content. Write clear, concise, and informative comments and docstrings that explain the purpose and functionality of your code. This practice will make your code more understandable and maintainable for your team members and future contributors.

4. Use common sense: When in doubt, use common sense and communicate with your team members to determine the best course of action. Discuss any changes or deviations from PEP8 rules with your team to ensure everyone is on the same page and understands the reasoning behind the decision. Also, be open to feedback from your team members and be willing to revise your code to enhance its readability and maintainability.--DIVIDER--# Summary

In this tutorial, we introduced the PEP8 style guide and discussed its importance for maintaining consistent, readable, and maintainable Python code. We covered key PEP8 recommendations, such as naming conventions, indentation, line length, whitespace, imports, and more. We also discussed using linters and formatters, such as Flake8, Pylint, Black, and Autopep8, to check and enforce PEP8 compliance. Furthermore, we explored integrating PEP8 checks into development workflows, striking a balance between PEP8 recommendations and practicality, and customizing PEP8 rules to fit your team's preferences and project requirements.

By following these guidelines, you can ensure that your Python code remains readable and maintainable, ultimately resulting in better collaboration and higher-quality projects.
qWBpwY20fqSz
ready-tensor
cc-by-sa
Licenses for ML Projects: A Primer
![licenses.png](licenses.png)--DIVIDER--TL;DR: This article explains the importance of licensing in ML projects, explores common license types, guides you in choosing the right license, and provides best practices for licensing your work. Understanding licensing is crucial for protecting your work and fostering collaboration in the ML community. --DIVIDER--## Article Overview In this article, we'll cover: 1. Introduction to Licenses in ML 2. Key Licensing Terms 3. Common License Types (MIT, Apache, GPL, etc.) 4. How to Choose the Right License 5. Licensing in Open Source ML Projects 6. Dual Licensing Explained 7. Applying a License to Your Project 8. Best Practices for ML Project Licensing We'll also provide an appendix with license templates and FAQs for quick reference. By the end of this article, you'll understand how to protect your ML projects while promoting innovation and collaboration in the community. Let's explore the world of ML licensing!--DIVIDER--:::warning{title="Caution"} ## Disclaimer This article provides general information about software licenses as they pertain to machine learning projects. The information contained in this article is intended for informational purposes only, and should not be construed as legal advice. While we strive to provide accurate general information, the information presented here is not a substitute for any kind of professional advice, and you should not rely solely on this information. Always consult a professional in the area for your particular needs and circumstances prior to making any professional, legal, or financial decisions. :::--DIVIDER--## Introduction to Licenses In the realm of digital technology, the term 'license' might seem overly formal or legalistic, especially when your primary focus is on algorithms and datasets. However, licenses play a crucial role in how the resources we create and use can be shared, modified, and deployed. A **license** is a legal instrument—usually a document—that outlines how a piece of work can be used by others. When you create a machine learning project or any software, you automatically hold the copyright to that work. By applying a license, you can permit others to use, modify, or distribute your work under specified conditions, all without relinquishing your copyright. Why should machine learning practitioners care about licenses? It's simple: they offer a degree of protection while encouraging collaboration and innovation. Without a license, your work defaults to 'all rights reserved', preventing others from using, modifying, or sharing it. This isn't ideal for the machine learning community, which thrives on open-source projects, collaboration, and shared knowledge. By attaching a license to your machine learning project, you provide explicit permission for others to use your work under certain conditions. This facilitates the sharing, adaptation, and even commercial use of your projects. Furthermore, a clear license can protect you from legal complications and misuse of your work. Understanding different types of licenses and their implications is essential. Some licenses, like the MIT License, permit anyone to use your work as long as you are credited, while others, like the GNU General Public License, place certain restrictions on the use or sharing of your work. 
By the end of this article, you'll have a solid understanding of these licenses, enabling you to choose one that fits your needs and intentions for your ML projects.--DIVIDER--## Glossary of Terms Before we delve into the different types of licenses, let's define some common terms used in discussions about software licenses: - **Source Code**: The human-readable version of a software program, typically written in a programming language.<br><br> - **Binary Code**: The machine-readable version of a software program, which computers execute directly.<br><br> - **Open-Source Software (OSS)**: Software available for use, modification, and distribution, typically under licenses that comply with the [Open Source Definition](https://opensource.org/osd).<br><br> - **Proprietary Software**: Software owned and controlled by an individual or company, restricting its use, modification, and distribution.<br><br> - **Freeware**: Software available at no monetary cost, but its source code might not be available for modification or distribution.<br><br> - **Shareware**: Software distributed free initially but may require payment for full functionality after a trial period.<br><br> - **Public Domain**: Works free for use by anyone without copyright restrictions.<br><br> - **Permissive Licenses**: Licenses (e.g., BSD, Apache) imposing minimal restrictions on software use, modification, and distribution.<br><br> - **Copyleft Licenses**: Licenses allowing derivative works but requiring them to adopt the same license as the original.<br><br> - **Derivative Work**: A work based on one or more pre-existing works, such as modifications or enhancements to original software.<br><br> - **Distribution**: Delivering software to others, whether via direct download, physical media, or other methods.<br><br> - **End-User License Agreement (EULA)**: A contract between the software author and user, outlining usage terms and restrictions.<br><br> - **Dual Licensing**: Offering software under two different licenses, typically one open-source and one proprietary.<br><br> - **Software Repository**: A storage location for software packages, often used in open-source contexts.<br><br> - **Contributor**: An individual or entity providing code or improvements to a software project.<br><br> - **Patent Rights**: Exclusive rights granted to inventors for their inventions. Some licenses grant users patent rights associated with the software, protecting them from infringement claims. With these important terms defined, let's explore the different types of licenses. --DIVIDER--## Understanding License Types When it comes to licensing your machine learning projects, there are a plethora of options available, each with its own set of rules and restrictions. While it's not feasible to cover all license types, we'll focus on some of the most commonly used licenses in the machine learning and broader software development communities. ### MIT License The [MIT License](https://opensource.org/licenses/MIT) is a permissive open-source license that's simple and straightforward. It allows users to do whatever they want with your work (including commercial use and modification) as long as they provide attribution back to you and don't hold you liable. ### Apache License 2.0 The [Apache License 2.0](https://opensource.org/licenses/Apache-2.0) is similar to the MIT License in its permissions but includes a built-in grant of patent rights from contributors to users, offering a degree of legal protection against patent claims. 
### GNU General Public License (GPL) The [GPL](https://opensource.org/licenses/GPL-3.0) is a "strong" copyleft license. This means: - If a project incorporates or links to GPL-licensed software/code in a manner that creates a derived work, it must be distributed under the GPL when shared with others. - Modifications to GPL code, when distributed, must also be released under the GPL. :::info{title="Note"} ### Understanding "Derived Work" A "derived work" refers to a new work that is based upon one or more pre-existing works. In the context of software and the GPL, it generally means a project that incorporates or is based on GPL-licensed code in such a way that it inherits the GPL's obligations. However, the exact definition of what constitutes a derived work can be legally complex and has been the subject of debates and varying interpretations. If unsure about whether your project constitutes a derived work, it's advisable to seek legal counsel. ::: ### GNU Lesser General Public License (LGPL) The [LGPL](https://opensource.org/licenses/lgpl-license) can be seen as a "lighter" version of the GPL, often chosen for software libraries. Its key features are: - It permits proprietary software to link to LGPL-licensed libraries without requiring the entire software to be open-sourced. - If modifications are made to an LGPL library, only the modifications (and not the whole proprietary software) need to be open-sourced under the LGPL. In essence, while both GPL and LGPL aim to promote open software, the LGPL provides greater flexibility for integration with proprietary software. ### BSD Licenses The [BSD Licenses](https://opensource.org/licenses/bsd-license.php) are a family of permissive free software licenses. Unlike the more restrictive GPL, they allow for: - Redistribution of the source code and binary forms, with or without modification. - Use in proprietary software without the need to disclose the proprietary code. The main requirement is that the BSD copyright notice is retained in redistributed code, ensuring credit to the original authors. Remember, choosing the right license depends on what you want others to be able to do with your work. Each license carries different implications for users of your project, whether it be for commercial use, open-source contributions, or private modifications. The key is to understand your goals for your project and how a license can help protect your interests and enable others to benefit from your work. In the next section, we'll consider what factors should be taken into account when choosing a license for your ML projects. ## Considerations for Choosing a License Choosing the right license for your machine learning project is a critical decision that requires careful thought. The choice of license directly influences how your project can be used, modified, and shared by others. Here are some important considerations to keep in mind: **Goals for Your Project** What do you hope to achieve with your project? Do you want it to be freely available for any use, or are you looking to monetize it? Do you want to encourage others to build upon your work, or would you rather maintain control over the modifications? Your answers to these questions will greatly influence the type of license you choose. **Community Norms** The norms of the community in which you're working can also influence your choice of license. Some communities favor certain licenses, and using a similar license can facilitate collaboration. 
**Compatibility with Other Licenses** If your work includes code or projects that are under other licenses, you need to consider license compatibility. Not all licenses are compatible with one another. For instance, a piece of software that is licensed under GPL cannot be included in a project that is licensed under a more permissive license, like MIT or Apache. **Commercial Use** You'll need to decide if you want to allow commercial use of your project. Some licenses, like the MIT and Apache licenses, allow unrestricted use, including commercial use, while others, like the copyleft GPL license, require any derived works to also be open-sourced, which may be undesirable for some commercial purposes. **Contributions and Modifications** If you're releasing an open-source project and hope to receive contributions from others, you'll need to think about how the license will affect potential contributors. More restrictive licenses might deter some contributors, while more permissive licenses might encourage contributions. Remember, there's no one-size-fits-all license. The best license for your ML project depends on your particular goals, the nature of your project, and the wider context in which your project will be used. In the next section, we'll discuss how licenses apply to open-source machine learning projects. --DIVIDER-- ## Licenses and Open Source ML Projects The concept of open-source is fundamental in the machine learning community. It enables a collaborative environment where researchers and practitioners can share their work and build upon others', accelerating innovation and learning. Licensing plays a pivotal role in this landscape, determining how these open-source projects can be used, shared, and modified. When releasing your machine learning projects as open-source, it's crucial to apply an appropriate license. Without a license, despite the source code being publicly available, others don't technically have the right to use, modify, or distribute the work. By adding a license, you explicitly grant these permissions. The choice of license also impacts the kind of contributions you can receive. For instance, permissive licenses like MIT or Apache 2.0 are often used in open-source ML projects to encourage contributions, as they allow others to freely use, modify, and distribute the work, including in proprietary software. On the other hand, copyleft licenses like GPL ensure that derivatives of your work also remain open-source, fostering an environment of open collaboration but potentially limiting the use of your work in proprietary software. Furthermore, consider that your open-source ML project may be used in combination with other projects or software. The compatibility of licenses becomes crucial in this context, as conflicts could legally prevent usage of your project. In summary, the licensing of your open-source machine learning project has a profound impact on its use, distribution, and potential for collaboration. As such, understanding the implications of different licenses is crucial when contributing to the open-source machine learning community. In the next section, we will explore the concept of dual licensing and its implications for machine learning projects. --DIVIDER-- ## Dual Licensing Dual licensing is a strategy wherein the owner of a software offers the software under two different licenses. 
One of these licenses is typically an open-source license that might have certain restrictions, and the other is typically a commercial or proprietary license that allows uses not permitted by the open-source license. Why would someone choose to dual license their machine learning project? The reasons can vary, but one common rationale is to allow the project to be freely used and modified in open-source projects, while also offering a paid license for commercial use that provides additional benefits, like the ability to keep modifications private or to get support services. Here's an example of how dual licensing might work: 1. You develop a machine learning project and you want to contribute to the open-source community, so you release the project under the GPL, which requires any modifications to also be open-source. 2. However, a company wants to use your project in a proprietary software product and they do not want to open-source their modifications. To accommodate this use case, you offer a commercial license that allows for private modifications in exchange for a fee. Remember, dual licensing can add complexity to your licensing strategy and may require you to manage different obligations for different users. Additionally, dual licensing only makes sense if you hold all the rights to the software or project; if your work is based on someone else's GPL-licensed work, for instance, you won't be able to offer a proprietary license. In the following section, we'll guide you through the practical process of applying a license to your machine learning project.--DIVIDER--## How to Apply a License to Your ML Project Applying a license to your machine learning project doesn't have to be a complex process. In essence, it involves including a license file in your project and, if necessary, adding license headers to your source files. Here are the general steps: **Choose a License** First, based on the considerations we've discussed, choose a license that aligns with your goals for your project. The [Open Source Initiative](https://opensource.org/licenses) provides a comprehensive list of open source licenses you can choose from. Websites like [Choose a License](https://choosealicense.com/) or [TL;DR Legal](https://tldrlegal.com/) can be handy resources to understand licenses in simple terms. **Add a LICENSE File** Once you've chosen a license, create a file in the root of your project repository named `LICENSE` (or `LICENSE.txt`). Into this file, you should put the full text of the chosen license. The text can usually be obtained from the license's official website or a trusted source like the Open Source Initiative. For licenses like the MIT and Apache 2.0 licenses, there's usually a line in the license text where you would insert your name (or your organization's name) as the copyright holder and the year. Be sure to replace these placeholders with the appropriate information. **Add License Headers (Optional)** For some licenses, particularly those that require sharing changes under the same license (like the GPL), it's recommended to add a short license header to the top of each source file in your project. This header usually includes the name of the license, the year, and the copyright holder's name. 
Here's an example for the GPL: ```python # Copyright (C) [year] [name of author or organization] # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. ``` **Announce Your License** Lastly, it's good practice to mention the license in your `README.md` file and in any public-facing documentation, so that it's clear to all users what license your project is under. Remember that while this process is relatively straightforward, it's important to choose your license carefully and to apply it correctly to ensure that your intentions for your project are clear. If you have any doubts or concerns, consider consulting with a legal expert. In the next section, we'll share some best practices for licensing machine learning projects. --DIVIDER--## Best Practices in Licensing ML Pprojects As we've seen, licensing is an important aspect of managing and sharing machine learning projects. As we close this article, here are some best practices to consider: 1. **Ensure Clarity**: Be sure to clearly communicate the licensing of your machine learning project. Include a `LICENSE` file in the root directory of your project, and mention the license in your `README.md` file.<br><br> 2. **Honor Existing Licenses**: If your project uses or builds upon others' work, ensure that you respect the terms of those licenses. Consult with a legal expert if you're unsure.<br><br> 3. **Align with Community Norms**: Consider the norms of your community when choosing a license. Aligning with commonly used licenses in your community can facilitate collaboration and compatibility with other projects.<br><br> 4. **Mind Compatibility**: If your project is intended to be used with other projects or software, consider how your chosen license interacts with the licenses of those projects. Legal conflicts arising from license incompatibilities can be problematic.<br><br> 5. **Review Your License Choice**: As your project evolves, your original licensing choice might no longer serve your goals. It's good practice to revisit your licensing strategy as your project grows and changes.<br><br> 6. **Consider Dual Licensing**: If you're looking to both contribute to the open-source community and monetize your project, consider dual licensing. This allows you to offer your project under both an open-source and a commercial license.<br><br> 7. **Seek Legal Advice When Needed**: Licensing involves legal decisions. If you're ever unsure about your licensing choices or obligations, it's best to consult with a legal expert. These practices can help you ensure that your intentions for your machine learning project are clear, you're respectful of others' work, and your project can be used, modified, and shared in the ways you intend.--DIVIDER--## Summary In this article, we've explored the role of licenses for machine learning projects, covering key terms, license types, and considerations for choosing a license. We discussed open-source and dual licensing strategies and provided a guide on how to apply a license to your ML project. Key best practices were highlighted, including clarity in licensing, respect for other licenses, and the need for license compatibility. As a final reminder, always consult with a legal expert if you're unsure about any licensing matters. -----DIVIDER--## References 1. 
[Choose a License](https://choosealicense.com/) - An open-source guide maintained by GitHub, which helps you understand different licenses and choose the right one for your project.<br><br> 2. [Open Source Initiative](https://opensource.org/licenses) - A comprehensive resource on different open-source licenses maintained by the Open Source Initiative.<br><br> 3. [Free Software Foundation](https://www.fsf.org/licensing/) - The Free Software Foundation's guide on different free software licenses.<br><br> 4. [Creative Commons](https://creativecommons.org/) - An organization that provides free, easy-to-use copyright licenses that provide a simple, standardized way to give the public permission to share and use your creative work.<br><br> 5. [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0) - The full text and explanation of the Apache License 2.0.<br><br> 6. [GNU Licenses](https://www.gnu.org/licenses/licenses.html) - The different licenses provided by the GNU Project.<br><br> 7. [MIT License](https://opensource.org/licenses/MIT) - The full text and explanation of the MIT license.<br><br> 8. [Dual Licensing](https://en.wikipedia.org/wiki/Multi-licensing) - An explanation of dual licensing or multi-licensing on Wikipedia.<br><br> 9. [Open Source Definition](https://opensource.org/osd) - The Open Source Initiative's definition of open source. --- --DIVIDER-- ## Appendix ### License Templates Licensing is an intricate field, and the exact wording of a license can significantly influence its implications. To aid in your understanding and to provide a quick resource for your projects, we've compiled the full templates for some of the most widely-used licenses in the machine learning and open-source community. Feel free to explore each template and select one that aligns with your project's goals: - [MIT License Template](https://opensource.org/license/mit/) - [Apache 2.0 License Template](https://opensource.org/licenses/Apache-2.0) - [GNU General Public License (GPL) Template](https://www.gnu.org/licenses/gpl-3.0.html#license-text) - [GNU Lesser General Public License (LGPL) Template](https://www.gnu.org/licenses/lgpl-3.0.html#license-text) - [BSD 2-Clause License Template](https://opensource.org/licenses/BSD-2-Clause)--DIVIDER--### FAQ **Q: Does open-source mean free?** A: Open-source refers to the accessibility of the source code, not the cost of the software. Open-source software is generally free to use and modify, and can often be distributed under the terms of the specific license. However, the exact permissions and restrictions can vary depending on the license. Some open-source licenses allow the software to be incorporated into commercial products which can be sold. **Q: If you don't include a license in your project, what happens?** A: Without a license, the default copyright laws typically apply, which means you retain all rights and others are not legally permitted to use, modify, or distribute your project. However, laws can vary by country, so it's always best to specify a license to make your intentions clear. **Q: What if I want to use a project that doesn't specify a license?** A: It's generally recommended not to use, modify, or distribute a project that doesn't specify a license, as this implies that the creator retains all rights and hasn't granted any explicit permission to others to use their work. It's always best to reach out to the creator and ask for clarification. 
**Q: Can you change the license of a project after it has been released?**
A: Yes, but it can be complicated. If you are the sole contributor to the project, you can change the license at any time. However, if your project has contributions from others, you will need their permission to change the license. Also, users who received the project under the original license can continue to use that version under its original terms.

**Q: Can you take someone's project and release it under a different license?**
A: Generally no, unless the original license allows it or you have explicit permission from the copyright holder. Always check the terms of the original license.

**Q: Can you take someone's project, modify it, and release it under the same license?**
A: Most open-source licenses allow this, but you should always check the specific terms of the license.

**Q: Can you take someone's project, modify it, and release it under a different license?**
A: It depends on the terms of the original license. While you can generally modify someone's project, releasing that modification under a different license is often restricted. Some licenses may allow it, while others may not. If a new license is applied to a derivative work, it's often required or at least good practice to acknowledge the original work and its license. Always check the terms of the original license.

**Q: Can you use someone's project in a commercial application?**
A: It depends on the license of the project. Some licenses, like the Apache 2.0 or MIT licenses, allow for commercial use. Others, like the AGPL, have conditions that can complicate commercial use. Always check the license terms.

**Q: How does licensing work when combining code with different licenses?**
A: When combining code with different licenses, it's important to consider license compatibility. Some licenses, like MIT and BSD, are permissive and have few restrictions, which makes them broadly compatible with other licenses. Others, like GPL, have stronger restrictions and require that any derivative work also be licensed under GPL. Always review the terms of each license to ensure they are compatible.

**Q: Can I use commonly used libraries such as scikit-learn, TensorFlow, or PyTorch in my project and release it under a license of my choice?**
A: Yes, but your chosen license must be compatible with the licenses of the libraries. TensorFlow is released under the Apache 2.0 License, while PyTorch and scikit-learn use BSD-style licenses. All of these are permissive licenses, meaning they place few restrictions on how you can use the libraries.

**Q: Can I use commonly used libraries such as scikit-learn, TensorFlow, or PyTorch in my commercial application?**
A: Yes, these libraries use permissive licenses (Apache 2.0 for TensorFlow, BSD-style licenses for PyTorch and scikit-learn) that allow for commercial use.

**Q: Can I use commonly used R packages in my project and release it under a license of my choice?**
A: Yes, but your chosen license must be compatible with the licenses of the packages. Many R packages are licensed under the GPL, which requires that derivative works (which could include projects that heavily use the package) are also licensed under the GPL.

**Q: Can I use commonly used R packages in my commercial application?**
A: It depends on the license of the packages. Many R packages are licensed under the GPL, which allows commercial use but has certain requirements if you distribute your application to others. Always check the license terms.
r95vGYcr1shK
ready-tensor
mit
Exploring Parameter-Efficient Fine-Tuning (PEFT)
![hero.jpg](hero.jpg)--DIVIDER--# TL;DR In this article, we explore Parameter-Efficient Fine-Tuning (PEFT) methods, including Full Fine-Tuning, LoRA (Low-Rank Adaptation), DoRA (Weight-Decomposed Low-Rank Adaptation), and QLoRA (Quantized LoRA). By training and testing models on the SST-2 (Stanford Sentiment Treebank) dataset, we compare these approaches in terms of accuracy, loss, memory savings, and computational efficiency. The results demonstrate how PEFT methods can significantly reduce the computational burden and memory requirements without compromising performance, making them ideal for large-scale language models.--DIVIDER--# Introduction As large language models continue to grow in size and complexity, the demand for efficient fine-tuning methods has increased dramatically. Traditional **full fine-tuning** approaches, which involve updating all model parameters, are resource-intensive and often impractical for large models due to memory and computational constraints. This challenge has led to the development of **Parameter-Efficient Fine-Tuning (PEFT)** methods, which allow for effective adaptation of pre-trained models while updating only a small fraction of the parameters. In this article, we dive into four popular fine-tuning approaches: 1. **Full Fine-Tuning**: The baseline approach where all model parameters are updated. 2. **LoRA (Low-Rank Adaptation)**: A method that introduces trainable low-rank matrices to the model's weight matrices, reducing the number of updated parameters. 3. **DoRA (Weight-Decomposed Low-Rank Adaptation)**: A further optimized variant of LoRA that decomposes model weights for enhanced efficiency. 4. **QLoRA (Quantized Low-Rank Adaptation)**: A quantized version of LoRA that leverages 4-bit quantization to further reduce memory usage while maintaining performance. We evaluate these techniques using the **SST-2 (Stanford Sentiment Treebank)** dataset, comparing their performance in terms of **accuracy**, **loss**, and **memory efficiency**. By the end of this article, readers will understand how PEFT methods can significantly reduce training costs while preserving or even improving model performance, making them an essential tool in the world of large-scale language models. --DIVIDER--# Full Fine-Tuning **Full Fine-Tuning** is the traditional approach to adapting a pre-trained model to a specific task. In this method, **all of the model's parameters are updated** during the fine-tuning process. This means that the pre-trained weights are not frozen; instead, they are adjusted to minimize the loss on the new dataset. <h4> How Full Fine-Tuning Works:</h4> In full fine-tuning, the model is first initialized with pre-trained weights, typically from a large dataset (such as a language model trained on a diverse corpus). During fine-tuning, each of the model's parameters is updated based on the gradients computed for the task-specific dataset. This involves running backpropagation through the entire model, computing and applying gradients to all layers. Since the model's entire parameter set is modified, full fine-tuning can lead to optimal performance for the target task, as the model has the flexibility to fully adapt to the new data. <h4>Pros:</h4> - **High flexibility** : The model can learn highly specific patterns for the new task, potentially leading to the best performance, especially when the task differs significantly from the pre-training objective. 
- **State-of-the-art results**: Full fine-tuning has been used in numerous applications to achieve leading results across a variety of NLP benchmarks.

<h4>Cons:</h4>

- **Memory and computational cost**: Fine-tuning all the parameters of a large model (e.g., models with billions of parameters) requires a significant amount of GPU memory and computational power, making it impractical for many users without access to specialized hardware.
- **Overfitting risk**: If the new dataset is small or very specific, full fine-tuning can lead to overfitting, as the model may overly adjust to the fine-tuning dataset, losing some of the benefits from pre-training.

While full fine-tuning remains the baseline approach, it often becomes impractical as model sizes increase. This has motivated the development of **parameter-efficient fine-tuning (PEFT)** methods, which aim to reduce the number of parameters updated during fine-tuning, lowering computational requirements while still achieving high performance.

--DIVIDER--# Overview of PEFT Approaches

**Parameter-Efficient Fine-Tuning (PEFT)** methods offer a practical solution to the high computational and memory demands of full fine-tuning by updating only a fraction of the model’s parameters. These methods are particularly useful for fine-tuning large pre-trained models with limited hardware resources. In this section, we briefly introduce three key PEFT approaches: **LoRA**, **DoRA**, and **QLoRA**.

<h2> 1. LoRA (Low-Rank Adaptation)</h2>

**LoRA** introduces trainable, low-rank matrices to the model’s weight matrices while freezing the core parameters. By only training the smaller, low-rank matrices, LoRA reduces the number of trainable parameters, making it highly efficient for large models. This technique has gained popularity for its ability to fine-tune models with significantly lower memory and computational requirements compared to full fine-tuning.

<h2>2. DoRA (Weight-Decomposed Low-Rank Adaptation)</h2>

**DoRA** builds on the principles of LoRA but decomposes each weight update into a **magnitude** component and a **direction** component before applying the low-rank adaptation. This decomposition gives finer-grained control over how the weights are adjusted, which can improve fine-tuning quality while adding only a negligible number of extra trainable parameters on top of LoRA. Like LoRA, DoRA is designed for scenarios where memory efficiency is paramount and fine-tuning needs to be performed with minimal overhead.

<h2>3. QLoRA (Quantized Low-Rank Adaptation)</h2>

**QLoRA** is a highly efficient fine-tuning method that combines the low-rank adaptation of LoRA with **quantization**, reducing the precision of the frozen weights (e.g., from 32-bit to 4-bit). This results in a dramatic reduction in memory usage while maintaining performance. QLoRA is particularly effective for fine-tuning large models on smaller hardware, making it one of the most memory-efficient methods available today.

--DIVIDER--# LoRA

**LoRA** is a popular **Parameter-Efficient Fine-Tuning (PEFT)** method designed to reduce the computational and memory overhead associated with fine-tuning large pre-trained models. Instead of updating the entire set of model parameters, LoRA introduces small, trainable **low-rank matrices** to specific weight matrices, while freezing the original model weights. This allows for efficient fine-tuning without sacrificing much performance.

The main idea behind LoRA is that instead of fine-tuning all parameters, we assume that the weight updates during fine-tuning can be expressed as a low-rank matrix. This significantly reduces the number of parameters to train, leading to memory and time savings.
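To see how large this reduction can be, here is a small back-of-the-envelope sketch. The layer dimensions and rank below are illustrative values (roughly the size of a GPT-2 Large attention projection), not figures taken from the experiments later in this article:

```python
# Illustrative comparison of trainable parameters for a single weight matrix.
in_features, out_features = 1280, 3840  # e.g., one large attention projection
rank = 8                                # LoRA rank

full_params = in_features * out_features                 # full fine-tuning trains every weight
lora_params = in_features * rank + rank * out_features   # LoRA trains only A (in x r) and B (r x out)

print(f"Full fine-tuning: {full_params:,} trainable parameters")
print(f"LoRA (rank={rank}): {lora_params:,} trainable parameters")
print(f"Relative size: {100 * lora_params / full_params:.2f}% of the original")
```

For this example, the LoRA update amounts to well under 1% of the weights in that single matrix, and the same ratio applies to every layer the adaptation is attached to.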
LoRA is especially effective in large models where the number of parameters is massive, making traditional fine-tuning impractical. LoRA has been widely adopted for tasks that require fine-tuning massive models, particularly in natural language processing (NLP) applications. It has been proven to retain a high level of performance, even when only a small fraction of the parameters are updated. <h2> Advantages of LoRA:</h2> - **Reduced Memory Usage**: By only training the low-rank matrices, LoRA drastically reduces the memory footprint required for fine-tuning. - **Computational Efficiency**: LoRA reduces the number of parameters to be updated, leading to faster training times. - **Scalability**: LoRA can be applied to extremely large models, making it feasible to fine-tune models that were previously too large to handle with limited hardware. LoRA is a powerful tool in the fine-tuning toolbox, and it serves as the foundation for more advanced PEFT methods such as DoRA and QLoRA. --DIVIDER--<h2> Technical Implementation</h2> In this section, we will walk through the **technical implementation** of the **LoRA** method applied to **GPT-2 Large**, specifically targeting the attention layers. The following implementation freezes the majority of the model’s parameters and applies **low-rank adaptation matrices** to the attention layers, which are of type **`Conv1D`** in GPT-2. We start by defining the LoRA layer and then recursively applying it to the relevant parts of the model. ![lora-diagram.png](lora-diagram.png) <h2> LoRA Class </h2> The first step is to create a custom `LoRA` class that decomposes the weight matrices into two smaller matrices **A** and **B**. The key is to modify the model by inserting these small, trainable matrices, which are later used during the fine-tuning process. ```python class LoRA(nn.Module): def __init__(self, original_layer, alpha, rank=8): super(LoRA, self).__init__() # Store the original layer's weight self.original_weight = original_layer.weight self.alpha = alpha self.rank = rank in_features = original_layer.weight.shape[0] out_features = original_layer.weight.shape[1] # Standard deviation for initialization std_dev = 1 / torch.sqrt(torch.tensor(rank).float()) # Perform weight decomposition into two low-rank matrices A and B # We initialize A and B with random values self.A = nn.Parameter(torch.randn(in_features, rank) * std_dev) self.B = nn.Parameter(torch.zeros(rank, out_features)) # Freeze the original weight (it won't be updated) self.original_weight.requires_grad = False def forward(self, x): # Approximate the original weight as the product of A and B low_rank_weight = self.alpha * torch.matmul(self.A, self.B) adapted_weight = self.original_weight + low_rank_weight # Apply the adapted weight to the input return torch.matmul(x, adapted_weight) ``` <h3>Low-Rank Approximation</h3> Instead of training the full weight matrix, we introduce two smaller matrices, **A** and **B**, to approximate the weight updates in a low-rank form. The **rank** of these matrices controls their dimensions: **A** has dimensions $$ (in\_features, rank) $$ **B** has dimensions $$ (rank, out\_features) $$ where $$in\_features$$ and $$out\_features$$ correspond to the original weight matrix dimensions. Multiplying **A** and **B** gives a matrix with the same shape as the original weight matrix $$ (in\_features, out\_features) $$ This allows us to efficiently learn an approximation of the weight updates without training the entire matrix. 
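As a quick sanity check, the sketch below wraps a GPT-2-style `Conv1D` projection with the `LoRA` class defined above and confirms that the output shape is preserved while only the low-rank matrices `A` and `B` remain trainable. The layer dimensions are illustrative, and the snippet assumes the `LoRA` class from the previous code block is in scope:

```python
import torch
from transformers.pytorch_utils import Conv1D

# A GPT-2-style attention projection: Conv1D(nf, nx) stores its weight with shape (nx, nf).
layer = Conv1D(nf=3840, nx=1280)           # illustrative dimensions
lora_layer = LoRA(layer, alpha=4, rank=8)  # LoRA class defined above

x = torch.randn(2, 16, 1280)               # (batch, sequence, hidden)
print(lora_layer(x).shape)                 # torch.Size([2, 16, 3840]) -- same output width as the original layer

trainable = sum(p.numel() for p in lora_layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in lora_layer.parameters())
print(f"Trainable parameters: {trainable:,} of {total:,}")  # only A and B require gradients
```

The frozen original weight still dominates the total parameter count, but only the two small matrices receive gradient updates during fine-tuning.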
Importantly, you can change the **rank** while maintaining the same output dimension. A **higher rank** captures more information and typically leads to **better performance**, but it also increases the **computational cost** and training time. Conversely, a **lower rank** reduces the memory and computational requirements but may lead to a loss in accuracy.<br><br> :::info{title="Info"} Suppose you have the original weight matrix of size (1000x1000). This means that you have a million parameters in the original layer. If we approximate the matrix by decomposing it into two matrices of shape (1000, 8) and (8, 1000), you would only have 16000 trainable parameters. If you then multiply the two matrices, you get the original dimensions back. This way we approximated a million parameters using only 16000 parameters. In this case the rank is 8. ::: <h3>Frozen Parameters</h3> The original model’s weight parameters are frozen (`requires_grad = False`), meaning they are not updated during fine-tuning. This significantly reduces memory usage and computational complexity because the majority of the model’s parameters remain untouched during the fine-tuning process.<br><br> <h3>Forward Pass</h3> During the forward pass, the effective weight is computed as a combination of the frozen original weight matrix and the scaled product of the two low-rank matrices, A and B, where the alpha parameter controls the magnitude of this adaptation. This scaling helps balance the contribution of the low-rank update to the overall weight matrix. The adapted weight matrix is then applied to the input, allowing the model to leverage the learned low-rank adaptation for fine-tuning, while still retaining the pre-trained knowledge encoded in the frozen weights. <h2>Applying LoRA to GPT-2 Large</h2> Now that we have the `LoRA` class, we need to recursively apply it to the **attention layers** of the model, which are implemented as **`Conv1D`** layers in GPT-2. ```python from transformers.pytorch_utils import Conv1D def apply_peft_to_layer(module, alpha=4, rank=8, type='lora'): """ Recursively applies LoRA/DoRA to the appropriate layers in the model. Args: module: The current module to examine and possibly replace. alpha: Scaling factor for LoRA. rank: The rank of the low-rank adaptation. type: The type of PEFT to apply ('lora' or 'dora'). Returns: None (modifies the module in place). """ peft_module = LoRA if type == 'lora' else DoRA for name, child_module in module.named_children(): # We target the attention layers of GPT-2, which are Conv1D layers if isinstance(child_module, Conv1D) and 'c_attn' in name: # Replace the original attention layer with the LoRA-adapted layer setattr(module, name, peft_module(child_module, alpha=alpha, rank=rank)) # If the module has children, apply the function recursively if len(list(child_module.children())) > 0: apply_peft_to_layer(child_module, alpha, rank, type) ``` - **Recursive Application**: This function navigates through the model's architecture, searching for attention layers (e.g., `c_attn`) that are implemented as `Conv1D` layers.<br><br> - **Conditional Replacement**: Once an attention layer is found, we replace it with the **LoRA-adapted** layer using the `setattr()` function. 
The `LoRA` layer only affects the specific parts of the model where it is applied, leaving the rest of the model unchanged.<br><br>
- **Recursive Search**: The function checks for child layers and applies LoRA to any matching layers it finds recursively, ensuring that all attention layers in the model are adapted.<br><br>

<h2>Model Modification and Loading</h2>

Finally, we define a function to load a pre-trained GPT-2 model and apply LoRA to its attention layers.

```python
def get_custom_peft_model(alpha=4, rank=8, type='lora'):
    """
    Load the model and apply LoRA/DoRA recursively to all applicable layers.

    Args:
        alpha: Scaling factor for LoRA.
        rank: Rank for low-rank adaptation in LoRA.
        type: The type of PEFT to apply ('lora' or 'dora').

    Returns:
        The model with LoRA/DoRA applied.

    Note:
        Relies on `model_name`, `tokenizer`, and `device` being defined
        elsewhere in the notebook.
    """
    # Load the GPT-2 model and set the pad token ID
    model = AutoModelForSequenceClassification.from_pretrained(
        model_name, ignore_mismatched_sizes=True
    ).to(device)
    model.config.pad_token_id = tokenizer.pad_token_id

    # Freeze all model parameters except those in the LoRA layers
    for param in model.parameters():
        param.requires_grad = False

    # Apply LoRA recursively to all relevant layers
    apply_peft_to_layer(model, alpha=alpha, rank=rank, type=type)

    return model
```

- **Loading the Model**: The `AutoModelForSequenceClassification` function loads a pre-trained **GPT-2 Large** model.<br><br>
- **Freezing the Model**: Before applying LoRA, we freeze all of the model’s parameters to ensure that only the LoRA layers will be updated during fine-tuning.<br><br>
- **Recursive LoRA Application**: We apply the `apply_peft_to_layer()` function to recursively insert LoRA into the attention layers.

<h2> Targeting the GPT-2 Attention Layers</h2>

In GPT-2, the attention mechanism is implemented using **`Conv1D`** layers in the transformer blocks. This code specifically targets the attention layers (`c_attn`) of GPT-2 Large, replacing them with LoRA-modified versions. This allows us to achieve fine-tuning by modifying only a fraction of the model's parameters while leveraging the pre-trained knowledge of the frozen layers.

--DIVIDER--# DoRA

**DoRA** is an extension of the **LoRA** method, offering even greater efficiency by applying a weight decomposition technique. Similar to LoRA, DoRA freezes the majority of the model's parameters and focuses on updating only small, trainable matrices. However, DoRA goes one step further by decomposing the weight matrices into two parts before applying low-rank adaptation, allowing for more granular control over the updates.

<h3> Key Differences from LoRA</h3>

• In LoRA, the entire weight update is approximated by the product of two low-rank matrices. In DoRA, the original weight matrix is first decomposed into two components: magnitude and direction. This decomposition separates the scaling factor (magnitude) from the orientation (direction) of the weight update, providing more control over the fine-tuning process and improving efficiency.

• The decomposition into magnitude and direction allows for better adaptability in certain tasks, where a more detailed breakdown of the model’s weights can lead to higher performance while keeping the number of trainable parameters small. Specifically, DoRA computes unit vectors to represent the direction of weight updates, while applying scaling through a magnitude factor. The unit vector, which represents the direction, is computed by normalizing the low-rank matrix product. You can obtain the unit vector by dividing the vector by its norm.
$$ \mathbf{u} = \frac{\mathbf{A} \mathbf{B}}{\|\mathbf{A} \mathbf{B}\|} $$ where $$\mathbf{A}$$ and $$\mathbf{B}$$ are the low-rank matrices, and $$\mathbf{u}$$ is the unit vector representing the direction of the weight update. The norm of a vector $$\mathbf{X}$$ is given by: $$ \|\mathbf{X}\| = \sqrt{\sum_{i=1}^{n} x_i^2} $$ <h2> Technical Implementation</h2> The technical implementation of DoRA builds upon the LoRA framework, but adds an additional decomposition step to the weight matrices. ```python class DoRA(nn.Module): def __init__(self, original_layer, alpha, rank=8): super(DoRA, self).__init__() self.original_weight = original_layer.weight self.alpha = alpha self.rank = rank in_features = original_layer.weight.shape[0] out_features = original_layer.weight.shape[1] # Perform weight decomposition into two low-rank matrices A and B # We initialize A and B with random values std_dev = 1 / torch.sqrt(torch.tensor(rank).float()) self.A = nn.Parameter(torch.randn(in_features, rank) * std_dev) self.B = nn.Parameter(torch.zeros(rank, out_features)) self.m = nn.Parameter(torch.ones(1, out_features)) self.original_weight.requires_grad = False def forward(self, x): # Approximate the original weight as the product of A and B low_rank_weight = self.alpha * torch.matmul(self.A, self.B) low_rank_weight_norm = low_rank_weight / (low_rank_weight.norm(p=2, dim=1, keepdim=True) + 1e-9) # Add the original (frozen) weight back to the low-rank adaptation low_rank_weight = self.m * low_rank_weight_norm adapted_weight = self.original_weight + low_rank_weight # Apply the adapted weight to the input return torch.matmul(x, adapted_weight) ``` - **Decomposition Step**: An extra decomposition step is introduced with the `self.m` parameter, allowing the model to learn different **magnitudes** for the normalized weight updates. This provides more flexibility by decoupling the direction of the weight updates (captured by the low-rank matrices) from their magnitude, enabling finer control over the adaptation process. - **Forward Pass**: The adapted weight is still a combination of the frozen weight and the low-rank matrices, but with an additional scaling layer that offers more flexibility in weight updates. To summarize, the key distinction between LoRA and DoRA lies in DoRA's decoupling of the magnitude and direction of the weight updates. This is achieved through the normalization of the low-rank matrices: ```python low_rank_weight_norm = low_rank_weight / (low_rank_weight.norm(p=2, dim=1, keepdim=True) + 1e-9) low_rank_weight = self.m * low_rank_weight_norm ``` By normalizing the weight updates and then scaling them with a learnable magnitude parameter (`self.m`), DoRA allows for more refined control over both the direction and magnitude of the weight updates, enhancing the model’s ability to adapt to specific tasks. --DIVIDER--# QLoRA **QLoRA** builds on the foundation laid by **LoRA** and further improves efficiency by incorporating **quantization** techniques. By reducing the precision of the model’s frozen parameters through quantization while keeping the low-rank adaptation matrices in higher precision, QLoRA dramatically reduces the memory and computational requirements without significantly affecting model performance. The key idea behind QLoRA is to combine the low-rank adaptation from LoRA with **4-bit quantization** for the frozen parameters. 
This approach allows fine-tuning on large models even on hardware with limited memory resources, such as GPUs with smaller VRAM, by maintaining the core functionality of the model with fewer bits while still updating essential components with high precision.

<h2> Key Features of QLoRA</h2>

- **4-Bit Quantization**: QLoRA uses **4-bit quantization** for the frozen parameters of the model. This drastically reduces memory usage while retaining enough precision to preserve pre-trained knowledge.
- **Higher-Precision Low-Rank Matrices**: The low-rank matrices (A and B) used for adaptation are kept in **FP16** or **FP32** precision, allowing QLoRA to achieve accurate fine-tuning results while reducing memory costs.

<h2> Why QLoRA is Efficient</h2>

By quantizing the non-trainable parts of the model and focusing on higher precision for the small trainable matrices, QLoRA achieves extreme memory efficiency. This makes it possible to fine-tune extremely large models using commodity hardware, allowing for wider accessibility without compromising performance.

--DIVIDER--<h2> Technical Implementation of QLoRA</h2>

In this section, we’ll walk through the technical implementation of **QLoRA**, which combines **4-bit quantization** with **Low-Rank Adaptation** (LoRA) to achieve memory-efficient fine-tuning on large models. The goal is to quantize the frozen parameters of the model using **4-bit precision** while applying LoRA to specific layers, allowing the fine-tuning process to focus on a small set of trainable parameters.

:::info{title="Info"}
<h1>BitsAndBytes</h1>

BitsAndBytes is a library designed to enable efficient quantization of large language models, reducing memory usage while maintaining model performance. It supports 4-bit and 8-bit quantization, allowing models to run on hardware with limited resources, such as consumer-grade GPUs. You can install it using:

```
pip install -U bitsandbytes
```
:::

The following code demonstrates how we load a pre-trained model, apply **4-bit quantization**, and then incorporate **LoRA** to fine-tune the model. Note that the `lora_config` passed to `get_peft_model` below is an example configuration built with the Hugging Face PEFT library's `LoraConfig`, targeting GPT-2's `c_attn` attention layers; the exact hyperparameters used in the accompanying notebook may differ.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, TaskType, get_peft_model

model_name = "openai-community/gpt2-large"

tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # Using eos_token as the pad_token if it's not defined

# Example LoRA configuration (illustrative hyperparameters) targeting GPT-2's attention layers
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,  # sequence classification (SST-2)
    r=8,                         # rank of the low-rank matrices
    lora_alpha=16,               # scaling factor
    target_modules=["c_attn"],   # GPT-2 attention projection layers
    lora_dropout=0.05,
)

# Step 1: Load the model using 4-bit quantization
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # Enable 4-bit quantization
    bnb_4bit_use_double_quant=True,        # Use double quantization for accuracy
    bnb_4bit_compute_dtype=torch.float16,  # Use FP16 for computation during training/inference
    bnb_4bit_quant_type="nf4",             # Normal float 4-bit quantization
)

# Step 2: Load the pre-trained model with the quantization configuration
model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    quantization_config=quantization_config,  # Pass the quantization config
    device_map="auto",                        # Automatically map model to available devices (e.g., GPU)
)

# Set the padding token ID
model.config.pad_token_id = tokenizer.pad_token_id

# Step 3: Apply LoRA to the quantized model
model = get_peft_model(model, lora_config)
```

<h3> Breakdown of the Code</h3>

1. **4-bit Quantization Configuration**:
   - We create a `BitsAndBytesConfig` to enable 4-bit quantization by setting `load_in_4bit=True`. This ensures that the frozen model parameters are stored in a highly compressed form, using only 4 bits per parameter.
- **`bnb_4bit_use_double_quant=True`** enables double quantization for better accuracy, and **`bnb_4bit_compute_dtype=torch.float16`** ensures that the computations during training and inference are done in 16-bit floating-point precision (FP16). - The **`bnb_4bit_quant_type="nf4"`** specifies the quantization type as **normal float 4-bit** (NF4), which is known to provide better precision compared to standard 4-bit quantization methods. 2. **Loading the Pre-trained Model**: - The pre-trained model is loaded using `AutoModelForSequenceClassification.from_pretrained` and mapped to the appropriate device (GPU or CPU) using **`device_map="auto"`**. - The model’s frozen parameters are quantized to 4 bits, significantly reducing memory usage without sacrificing much accuracy. This allows for large models to be loaded on memory-limited devices, such as consumer-grade GPUs. 3. **Applying LoRA**: - After loading the quantized model, we apply **LoRA** using the `get_peft_model` function. This ensures that only a small set of trainable low-rank matrices is updated during fine-tuning, while the frozen, quantized weights remain untouched. - The result is a memory-efficient fine-tuning process that still retains the performance benefits of the original pre-trained model. <h3> Why This Implementation is Efficient:</h3> By combining 4-bit quantization with LoRA, QLoRA dramatically reduces the memory footprint required to fine-tune large models. The quantization of frozen weights ensures that memory usage is minimized, while LoRA allows fine-tuning to occur on a small set of trainable parameters, preserving performance while making fine-tuning feasible on hardware with limited resources. --DIVIDER--# PEFT in Action In this section, we demonstrate **Parameter-Efficient Fine-Tuning (PEFT)** in action by comparing the performance and efficiency of the different approaches: **Full Fine-Tuning**, **LoRA**, **DoRA**, and **QLoRA**. We trained each of these methods on the **SST-2 dataset** and captured both the **model performance** (e.g., accuracy) and **running time** to highlight the trade-offs between each approach. :::info{title="Info"} <h2>SST-2 (Stanford Sentiment Treebank)</h2> The **SST-2 (Stanford Sentiment Treebank)** dataset is a popular benchmark for **sentiment classification**. It consists of movie reviews, where each review is labeled as either **positive** or **negative**. The task involves classifying the sentiment of each review based on the text, making it a suitable dataset for evaluating the performance of natural language models. SST-2 is widely used for fine-tuning pre-trained models in NLP because of its simplicity and binary classification nature, providing a good baseline for comparing different model architectures and fine-tuning approaches. ::: We provide a [notebook](https://github.com/readytensor/rt_peft_publication/blob/master/peft.ipynb) showcasing the full training pipeline, including: - Loading the dataset and pre-trained models. - Applying each PEFT method. - Measuring training times and memory usage. - Evaluating the models' performance on the SST-2 dataset. <h3> Key Metrics</h3> - **Accuracy**: We evaluate how well each model performs in terms of sentiment classification on SST-2. - **Running Time**: This includes both training time and memory efficiency, particularly how PEFT methods reduce resource consumption while maintaining strong performance. 
- **Model Size**: For approaches like QLoRA, we observe significant reductions in model size due to quantization, allowing training on smaller hardware setups. By comparing the results from these different approaches, we can demonstrate the **efficiency** and **scalability** benefits of PEFT methods, particularly for large models where full fine-tuning becomes impractical. --- --DIVIDER--## Execution Time We compare the execution times of different fine-tuning approaches: **Full Fine-Tuning**, **LoRA**, **DoRA**, and **QLoRA**. To make the comparison easier to interpret, we've normalized the execution times, with **Full Fine-Tuning** set to 100%. The bar chart below illustrates the **relative execution times** for each approach. As expected, **Full Fine-Tuning** takes the longest time, since it updates all model parameters. In contrast, **LoRA**, **DoRA**, and **QLoRA** dramatically reduce execution times by focusing on a smaller set of parameters and applying techniques such as low-rank adaptation and quantization. ![execution_times.png](execution_times.png) - **LoRA** and **DoRA** achieve significant reductions in execution time by freezing most model parameters and training only the low-rank matrices. - **QLoRA** goes even further by applying 4-bit quantization to the frozen parameters, offering the most efficient execution time among the approaches. This comparison highlights how parameter-efficient methods like **LoRA**, **DoRA**, and **QLoRA** enable fast fine-tuning of large models, making them suitable for hardware with limited resources while maintaining competitive performance. --DIVIDER--## Model size We compare the model size of the **Original Model** with the size after applying **QLoRA**. To visualize this, we’ve created a diagram showing two circles representing the relative sizes of the models. The original model was **2.88GB**, and after applying QLoRA, the model size was reduced to just **0.46GB**—which is only **16%** of the original size, thanks to quantization and low-rank adaptation. This significant reduction in size comes with only a 4% drop in validation accuracy. The validation accuracy of full fine-tuning was 90%, while QLoRA achieved a comparable 86%, making this a highly efficient trade-off between model size and performance. The plot below illustrates the significant reduction in model size achieved through QLoRA: ![model_size_comparison_.png](model_size_comparison_.png) It’s important to note that methods like **LoRA** and **DoRA** do not directly affect the overall model size, as they primarily modify how the model is fine-tuned by freezing most of the parameters and introducing trainable low-rank matrices. However, **QLoRA** achieves a significant size reduction by quantizing the frozen weights, making it much more memory-efficient. --DIVIDER--## Training Loss & Validation Accuracy Now let's compare the training loss and validation accuracy of different fine-tuning approaches, including Full Fine-Tuning, LoRA, DoRA, and QLoRA. The plot below shows the training loss for each approach across multiple epochs. While Full Fine-Tuning achieves the lowest training loss, it doesn’t necessarily result in the best validation accuracy. In fact, LoRA demonstrates better validation accuracy, even though its training loss is slightly higher. ![loss_plot.png](loss_plot.png) ![accuracy_comparison.png](accuracy_comparison.png) This highlights a critical observation when working with smaller datasets: Full Fine-Tuning can lead to overfitting. 
It optimizes well on the training data (leading to lower training loss), but this can come at the cost of generalization to unseen validation data.

On the other hand, methods like LoRA and QLoRA, which focus on updating fewer parameters, tend to generalize better, striking a balance between training performance and validation accuracy.

By using parameter-efficient methods such as LoRA, we can avoid overfitting and achieve stronger validation performance, making these approaches particularly effective for fine-tuning on small datasets.

--DIVIDER--# Conclusion

In this article, we've explored several **Parameter-Efficient Fine-Tuning (PEFT)** approaches, including **LoRA**, **DoRA**, and **QLoRA**, and compared them to **Full Fine-Tuning**. Through our experiments, we observed key trade-offs in terms of model size, execution time, training loss, and validation accuracy.

- **Full Fine-Tuning** delivered the lowest training loss, but it struggled with overfitting on smaller datasets, as shown by its lower generalization performance (validation accuracy).
- **LoRA** and **DoRA** provided a significant reduction in training time and resource usage, with LoRA demonstrating better generalization and achieving higher validation accuracy than Full Fine-Tuning.
- **QLoRA**, leveraging quantization and low-rank adaptation, offered the most memory-efficient fine-tuning approach, reducing model size by a staggering 84% while maintaining competitive accuracy.

Overall, PEFT methods like LoRA and QLoRA offer a promising solution for fine-tuning large models on small datasets or limited hardware. They strike a balance between efficiency and performance, making them an attractive option for modern machine learning tasks.

These findings demonstrate the value of adopting parameter-efficient methods, especially when dealing with limited resources, without sacrificing model performance.--DIVIDER--# References

Hu, Edward J., et al. [LoRA: Low-Rank Adaptation of Large Language Models](https://arxiv.org/abs/2106.09685)

Liu, Shih-Yang, et al. [DoRA: Weight-Decomposed Low-Rank Adaptation](https://arxiv.org/abs/2402.09353)

Dettmers, Tim, et al. [QLoRA: Efficient Finetuning of Quantized LLMs](https://arxiv.org/abs/2305.14314)

[BitsAndBytes: Optimizing Memory for Large Language Models](https://huggingface.co/docs/bitsandbytes/main/en/index)
sBFzhbX4GpeQ
3rdson
none
How to Build RAG Apps with Pinecone, OpenAI, Langchain and Python
![2023-11-Retrieval-augmented-generation-what-it-is-and-why-its-a-hot-topic-for-enterprise-AI-Blog-1.webp](2023-11-Retrieval-augmented-generation-what-it-is-and-why-its-a-hot-topic-for-enterprise-AI-Blog-1.webp)

## Pre-requisites:

> 1. Before jumping into the discussion, it’s important to have a foundational understanding of **RAG**, which stands for **Retrieval Augmented Generation**. If you’re unfamiliar with this concept, you can read more about it [here](https://www.datacamp.com/blog/what-is-retrieval-augmented-generation-rag).

> 2. To follow along with this tutorial, you need to install some libraries. So just create a requirements.txt file and put the information below there.

```python
unstructured
tiktoken
pinecone-client
pypdf
openai
langchain
python-dotenv
```

> 3. Open your terminal or command prompt, navigate to the directory containing your `requirements.txt` file, and run `pip install -r requirements.txt`
> This will install all the libraries listed in the `requirements.txt` file

> 4. Create a `.env` file and put your **OpenAI** and **Pinecone API keys** there just like I did in the code sample below
> You can get your Pinecone API key [here](https://app.pinecone.io/organizations/-NuPZdGGQlmy8gJiXBOK/projects/3596318c-5320-4481-b0b5-54f46cfaf015/keys) and your OpenAI API key [here](https://platform.openai.com/api-keys)

```python
OPENAI_API_KEY="your openAI api key here"
PINECONE_API_KEY="your pinecone api key here"
```

> 5. Open the Python file you will be working with and write the following code there to load your environment variables

```python
import os
from dotenv import load_dotenv
from langchain.embeddings.openai import OpenAIEmbeddings

load_dotenv()

# Accessing the various API keys
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")
EMBEDDINGS = OpenAIEmbeddings(api_key=os.environ["OPENAI_API_KEY"])
PINECONE_API_KEY = os.getenv("PINECONE_API_KEY")
```

**Now you are good to go**

![1_7pw-tzH1lPiIr66RK8ZxeQ.webp](1_7pw-tzH1lPiIr66RK8ZxeQ.webp)

## Why Pinecone Is My Preferred Vector Database

There are many vector databases to choose from while building RAG apps (you can learn more about them [here](https://www.datacamp.com/blog/the-top-5-vector-databases#5-of-the-best-vector-databases-in-2023-theli)), but I will always suggest Pinecone because:

> 1. Pinecone is a cloud-based vector database platform that has been purpose-built to tackle the unique challenges associated with high-dimensional data.
> 2. It is already hosted, so you don’t need to bother about hosting the database after building your application.
> 3. It is a fully managed database that allows you to focus on building your RAG app rather than worrying about infrastructure such as RAM, ROM, and storage, as well as how to scale it. It is highly scalable and allows real-time data ingestion with low-latency search.

### The two downsides of using Pinecone are that:

> 1. It is not open source (but that is a small price to pay for salvation😅)
> 2. Working with their latest serverless index feature together with Langchain can be stressful due to the lack of comprehensive documentation.

**That’s why I’m writing this article. So that by following my steps and my code samples, you’ll be able to build RAG apps and easily adapt them to suit your needs.**

## Building RAG Apps: A Step-By-Step Guide

To build any RAG application regardless of the Vector Database, Large Language Model (LLM), Embedding Model, or programming language, below are the steps you need to follow.

> ## I have divided the steps into two parts:

**Part 1:** Reading, processing and storing the data and vectors in a vector database.
**Part 2:** Answering queries using the information in the vector database.

# Below are the steps for Part 1:

1. Read the files (PDFs, txt, CSV, docs, etc.) where the texts are stored — This will help you to further work with them.
2. Divide the texts into chunks — So they can be fed into your embedding model.
3. Embed the chunked texts into vectors using the embedding model of your choice — So they can be stored in the vector database.
4. Combine the embeddings and the chunked text — So you can upsert them into the vector database.
5. Upsert/Push the vectors and text to the database — So you can query the database later.

## For the second part:

1. **Embed your query/question —** This will help convert your questions into vectors before querying the database.
2. **Query the database —** This is where the users send the query vectors to the vector database.
3. **Pass the answers from the vector database to your LLM —** This will help provide a better and more readable answer for your users.

## PART ONE: Reading, Processing and Storing the Data and Vectors in a Vector Database

1. **Read the Files (PDFs, Txt, CSV, Docs, etc.) Where the Texts are Stored**

Most of the time when you are working on a RAG application, you will have your text data in a txt file, a PDF, or some other suitable format. You will need to read it into your Python script so that you can perform further processing on it.

> The sample code below is a function designed to read PDF files and display only the page content using the LangChain PyPDF library. However, you can replace it with any other library of your choice for reading PDF files or any other files.

```python
from langchain_community.document_loaders.pdf import PyPDFDirectoryLoader


def read_doc(directory: str) -> list[str]:
    """Function to read the PDFs from a directory.

    Args:
        directory (str): The path of the directory where the PDFs are stored.

    Returns:
        list[str]: A list of text in the PDFs.
    """
    # Initialize a PyPDFDirectoryLoader object with the given directory
    file_loader = PyPDFDirectoryLoader(directory)

    # Load PDF documents from the directory
    documents = file_loader.load()

    # Extract only the page content from each document
    page_contents = [doc.page_content for doc in documents]

    return page_contents


# Call the function
full_document = read_doc("the folder path where your pdfs are stored/")
```

> One common issue you might encounter is related to corrupted or improperly formatted PDF files. The simplest way to troubleshoot this problem is to identify the problematic PDF file systematically. Here’s how you can approach it:
> First, organize your PDF collection into folders, each containing a manageable number of files, say 10, 20, 50, or 100, depending on the size of your collection. Then, run the processing function on each folder separately, passing the folder name as an argument.
> If the function executes successfully without any errors for a particular folder, it suggests that the problematic PDF is not present in that folder. However, if the function encounters an error while processing a specific folder, it indicates that the issue lies within that folder.
> To pinpoint the exact PDF causing the error, you can recursively divide the problematic folder into smaller subsets and repeat the process. Continue dividing the subsets until you isolate the specific PDF file that’s causing the error.
> Once identified, you have two options: either delete the problematic PDF or attempt to rectify the issue by copying and pasting its content into a new PDF file.
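> If you would rather script this check than run it by hand, below is a minimal sketch of the divide-and-conquer idea. It assumes the `read_doc` function defined above; the folder layout (`pdfs/batch_01`, `pdfs/batch_02`, ...) and the `find_problem_folders` helper are illustrative names, not part of the original code.

```python
import os


def find_problem_folders(parent_dir: str) -> list[str]:
    """Run read_doc on each subfolder and report the ones that raise an error."""
    failing = []
    for name in sorted(os.listdir(parent_dir)):
        folder = os.path.join(parent_dir, name)
        if not os.path.isdir(folder):
            continue
        try:
            read_doc(folder + "/")  # read_doc is the loader defined above
            print(f"OK: {folder}")
        except Exception as err:  # a corrupted or badly formatted PDF usually fails here
            print(f"Problem in: {folder} -> {err}")
            failing.append(folder)
    return failing


# Example: split your PDFs into pdfs/batch_01, pdfs/batch_02, ... and run:
# problem_folders = find_problem_folders("pdfs")
# Then re-split each problem folder into smaller batches and repeat
# until a single PDF is left.
```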
> This methodical approach of dividing and testing your PDF collection systematically allows you to efficiently identify and address any errors encountered while processing multiple PDF files; otherwise, you can try using any other PDF reader.

## 2. Divide the Texts into Chunks

After successfully reading the PDF files, the next step is to divide the text into smaller chunks. This step is crucial because the chunked texts will be passed into the embedding model for processing.

Breaking down the texts into manageable chunks serves several purposes. First, it ensures that the embedding model can efficiently process the information without overwhelming its capacity. Many embedding models have limits on the input size they can handle, so dividing the texts into smaller pieces ensures compatibility.

Additionally, chunking the texts allows for more granular representation and retrieval of information. By breaking down the content into logical segments, you can associate specific information with its corresponding chunk, enabling more precise and relevant responses to queries.

The chunking process can be tailored to your specific needs and the nature of your data. For example, you might choose to divide the texts based on sections, paragraphs, or even sentence boundaries, depending on the level of granularity required for your application.

It’s important to strike a balance between chunk size and information completeness. Smaller chunks may provide more granular information but may lack context, while larger chunks may provide more context but could be less precise in pinpointing specific details.

> You can learn more about chunk size and how to chunk texts [here](https://towardsdatascience.com/how-to-chunk-text-data-a-comparative-analysis-3858c4a0997a?gi=09d77fc91e12).
> The sample code below is a function designed to chunk your PDFs, each chunk having a maximum chunk size of 1000.

```python
def chunk_text_for_list(docs: list[str], max_chunk_size: int = 1000) -> list[list[str]]:
    """
    Break down each text in a list of texts into chunks of a maximum size,
    attempting to preserve whole paragraphs.
    :param docs: The list of texts to be chunked.
    :param max_chunk_size: Maximum size of each chunk in characters.
    :return: List of lists containing text chunks for each document.
    """

    def chunk_text(text: str, max_chunk_size: int) -> list[str]:
        # Ensure each text ends with a double newline to correctly split paragraphs
        if not text.endswith("\n\n"):
            text += "\n\n"

        # Split text into paragraphs
        paragraphs = text.split("\n\n")
        chunks = []
        current_chunk = ""

        # Iterate over paragraphs and assemble chunks
        for paragraph in paragraphs:
            # Check if adding the current paragraph exceeds the maximum chunk size
            if (
                len(current_chunk) + len(paragraph) + 2 > max_chunk_size
                and current_chunk
            ):
                # If so, add the current chunk to the list and start a new chunk
                chunks.append(current_chunk.strip())
                current_chunk = ""

            # Add the current paragraph to the current chunk
            current_chunk += paragraph.strip() + "\n\n"

        # Add any remaining text as the last chunk
        if current_chunk:
            chunks.append(current_chunk.strip())

        return chunks

    # Apply the chunk_text function to each document in the list
    return [chunk_text(doc, max_chunk_size) for doc in docs]


# Call the function
chunked_document = chunk_text_for_list(docs=full_document)
```

> When you call this function, it should return a list of lists of strings (one list of chunks per document). If you decide not to use my function, just head over to [this article](https://towardsdatascience.com/how-to-chunk-text-data-a-comparative-analysis-3858c4a0997a) to see other ways you can chunk your text data to suit your needs.

## 3. Embed the Chunked Texts Into Vectors Using the Embedding Model of Your Choice

Now that you have chunked the texts into smaller segments, the next step is to pass these chunks through an embedding model to obtain their vector representations. The embedding model maps the textual information into high-dimensional vector spaces, where semantic similarities and relationships are preserved.

The choice of embedding model can vary based on your requirements and preferences. Some popular options include pre-trained models like BERT, GPT, or specialized models tailored for specific domains or tasks.

> The function below generates the vector embeddings for the chunked texts using *"text-embedding-ada-002"* from OpenAIEmbeddings, but you can use any other embedding model of your choice. You can learn more about OpenAI Embeddings and pricing [here](https://openai.com/api/pricing/).

```python
from langchain.embeddings.openai import OpenAIEmbeddings


def generate_embeddings(documents: list[any]) -> list[list[float]]:
    """
    Generate embeddings for a list of documents.

    Args:
        documents (list[any]): A list where each item is the list of text chunks for one document.

    Returns:
        list[list[float]]: A list containing a list of embeddings corresponding to the documents.
    """
    embedded = [EMBEDDINGS.embed_documents(doc) for doc in documents]
    return embedded


# Run the function
chunked_document_embeddings = generate_embeddings(documents=chunked_document)

# Let's see the dimension of our embedding model so we can set it up later in pinecone
print(len(chunked_document_embeddings[0][0]))
```

> While executing this function, you should not encounter any errors. However, if you do face issues, please check if your chunked text is a list of strings and not any other data type. The `embed_documents` method expects a list of strings as input, and providing any other data type may result in an error.

## 4.
Combine the Embeddings and the Chunked Text Now that you have your embeddings ready, you need to combine the embeddings and the chunked text so that you can upsert them to the database. Additionally, you need a unique ID for each chunk to identify and associate the relevant information. > In the first function below, I used the `sha256` algorithm from `hashlib` to create a unique ID for each of the chunks. If you don’t know what `sha256` does, you can check this [article](https://www.simplilearn.com/tutorials/cyber-security-tutorial/sha-256-algorithm). > I now called the first function inside the second function so that I could create the unique ID and afterwards create a dictionary containing the `embeddings`, `unique IDs` and `metadata.` > Also, I used `"values": embeddings[0]` because our embedding is stored in a list of a list and I only need the inner list to be passed into the Pinecone’s upsert function later so that is why I used `embeddings[0].` If your embedding is in a list and not a list of a list, you can simply use embeddings. ```python import hashlib def generate_short_id(content: str) -> str: """ Generate a short ID based on the content using SHA-256 hash. Args: - content (str): The content for which the ID is generated. Returns: - short_id (str): The generated short ID. """ hash_obj = hashlib.sha256() hash_obj.update(content.encode("utf-8")) return hash_obj.hexdigest() def combine_vector_and_text( documents: list[any], doc_embeddings: list[list[float]] ) -> list[dict[str, any]]: """ Process a list of documents along with their embeddings. Args: - documents (List[Any]): A list of documents (strings or other types). - doc_embeddings (List[List[float]]): A list of embeddings corresponding to the documents. Returns: - data_with_metadata (List[Dict[str, Any]]): A list of dictionaries, each containing an ID, embedding values, and metadata. """ data_with_metadata = [] for doc_text, embedding in zip(documents, doc_embeddings): # Convert doc_text to string if it's not already a string if not isinstance(doc_text, str): doc_text = str(doc_text) # Generate a unique ID based on the text content doc_id = generate_short_id(doc_text) # Create a data item dictionary data_item = { "id": doc_id, "values": embedding[0], "metadata": {"text": doc_text}, # Include the text as metadata } # Append the data item to the list data_with_metadata.append(data_item) return data_with_metadata # Call the function data_with_meta_data = combine_vector_and_text(documents=chunked_document, doc_embeddings=chunked_document_embeddings) ``` By combining the embeddings, unique_ID and text before upserting, you streamline the retrieval process and ensure the relevant text is readily available alongside similar embeddings found during searches. This approach simplifies the overall process and potentially improves efficiency by leveraging the vector database’s optimized storage and retrieval mechanisms. ## 5. Upload/Push the Vectors and Text to the Database Now that you have your embeddings, unique IDs and chunked data ready, you need to push(upsert) them to Pinecone. > Note: The embeddings are used for efficient similarity search, while the text is the original content retrieved when a relevant match is found during the search. ## To achieve this, you need to take the following steps: 1. Login to [pinecone.io](https://www.pinecone.io/) 2. 
Create a [serverless index](https://app.pinecone.io/organizations/-NuPZdGGQlmy8gJiXBOK/projects/3596318c-5320-4481-b0b5-54f46cfaf015/create-index/serverless) > Note: While creating an index, you need to specify your **index name**(the name you want to give your index), **the metrics**(you can select cosine) and **the dimension of your embedding model**(for **text-embedding-ada-002,** it is **1536**). > If you are using any other model, endeavour to find the dimension of your embedding model and input it as your dimension. You can know the dimension from the `len` function we ran after embedding the chunked data or you can simply google it. > **Now in your Python file, connect to the index using the code below** ```python from pinecone import Pinecone pc = Pinecone(api_key=PINECONE_API_KEY) index = pc.Index("write the name of your index here") ``` After you have connected your index, you can proceed to store (upsert) the vectors, unique IDs, and the corresponding chunked texts in the vector database. > **You can use the function below to `upsert` the data to Pinecone** ```python def upsert_data_to_pinecone(data_with_metadata: list[dict[str, any]]) -> None: """ Upsert data with metadata into a Pinecone index. Args: - data_with_metadata (List[Dict[str, Any]]): A list of dictionaries, each containing data with metadata. Returns: - None """ index.upsert(vectors=data_with_metadata) # Call the function upsert_data_to_pinecone(data_with_metadata= data_with_meta_data) ``` > Note: There is a size limit on the data that can be upserted into Pinecone at once (around 4MB), so don’t try to upsert your whole data in a single operation. Instead, partition your data into smaller batches and upsert them sequentially. Now that you have completed the first part of the process, which is the main work for the RAG (Retrieval-Augmented Generation) app, the next step is to query the vector database and retrieve relevant information from it. We can now head over to the second part which involves: Answering queries using the information in the vector database. # PART TWO: Answering Queries Using the Information in the Vector Database ## 1. Embed Your Query/Question Before you can send a question or query to the database, you need to embed it, just like you embedded the documents. The vector obtained from embedding the question will then be sent to the database, and using similarity search, the most relevant information will be retrieved. The process of embedding the query is similar to how you embed the text chunks during the data preparation stage. You’ll use the same embedding model to generate a vector representation of the query, capturing its semantic meaning and context. It’s important to use the same embedding model and configuration that you used for embedding the text chunks. Consistency in the embedding process ensures that the query embedding and the stored embeddings reside in the same vector space, enabling meaningful comparisons and similarity calculations. Once you have the query embedding, you can proceed to the next step of sending it to the vector database for similarity search and retrieval of relevant information. 
Below is a function you can use to embed your query

```python
def get_query_embeddings(query: str) -> list[float]:
    """This function returns a list of the embeddings for a given query

    Args:
        query (str): The actual query/question

    Returns:
        list[float]: The embeddings for the given query
    """
    query_embeddings = EMBEDDINGS.embed_query(query)
    return query_embeddings


# Call the function
query_embeddings = get_query_embeddings(query="Your question goes here")
```

> If you noticed, here I used `EMBEDDINGS.embed_query()` but when I was embedding the chunked document I used `EMBEDDINGS.embed_documents()`. This is because `EMBEDDINGS.embed_documents()` is used for a list of texts and our chunked document is a list of texts, while `EMBEDDINGS.embed_query()` is used for queries. You can read more about it here on [Langchain Docs](https://python.langchain.com/docs/modules/data_connection/text_embedding/).

## 2. Query the Database

After you have embedded the question/query, you need to send the query embeddings to the Pinecone database, where they will be used for similarity search and retrieval of relevant information. The query embeddings serve as the basis for finding the most similar embeddings stored in the database.

Pinecone provides efficient similarity search capabilities, allowing you to query the vector database with the query embedding and retrieve the top-k most similar embeddings, along with their associated metadata (in this case, the chunked texts).

> Below is a function you can use to query the Pinecone database. It returns a list of dictionaries containing the unique ID, the metadata (chunked text), the score and the values.

```python
def query_pinecone_index(
    query_embeddings: list, top_k: int = 2, include_metadata: bool = True
) -> dict[str, any]:
    """
    Query a Pinecone index.

    Args:
    - query_embeddings (list): The embedded query vector.
    - top_k (int): Number of nearest neighbors to retrieve (default: 2).
    - include_metadata (bool): Whether to include metadata in the query response (default: True).

    Returns:
    - query_response (dict[str, any]): Query response containing nearest neighbors.
    """
    # Uses the `index` object we connected to earlier
    query_response = index.query(
        vector=query_embeddings, top_k=top_k, include_metadata=include_metadata
    )
    return query_response


# Call the function
answers = query_pinecone_index(query_embeddings=query_embeddings)
```

The `top_k` parameter determines how many top similar embeddings and associated texts to retrieve from the Pinecone database. A higher `top_k` value yields more potential answers but increases the risk of irrelevant results, while a lower value yields fewer but more precise answers.

Choose `top_k` judiciously based on your needs. For complex/diverse queries needing multiple perspectives, a higher `top_k` may be better. For specific/focused queries, a lower `top_k` prioritizing precision over recall might be preferable.

Experiment with different `top_k` values and evaluate the relevance and usefulness of retrieved information, considering dataset size and diversity. A larger, more varied dataset may benefit from a higher `top_k`, while a smaller, focused dataset could perform well with a lower `top_k`. Continuously assess `top_k`’s impact on response quality to optimize your RAG app’s performance in providing relevant and comprehensive responses.

## 3.
Pass the Answers From the Vector Database to Your LLM

Now that you have obtained a dictionary containing the answer, you need to extract the answer text from the dictionary and pass it through a Large Language Model (LLM) to generate a better and more coherent response.

> Below is the code and function that can help you extract the text from the dictionary and then pass it into the function together with a prompt.

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

LLM = OpenAI(temperature=0, model_name="gpt-3.5-turbo-instruct")  # Adjust the temperature to your taste

# Extract only the text from the dictionary before passing it to the LLM
# (`answers` is the response returned by query_pinecone_index above)
text_answer = " ".join([doc['metadata']['text'] for doc in answers['matches']])

prompt = f"{text_answer} Using the provided information, give me a better and summarized answer"


def better_query_response(prompt: str) -> str:
    """This function returns a better response using LLM

    Args:
        prompt (str): The prompt template

    Returns:
        str: The actual response returned by the LLM
    """
    better_answer = LLM(prompt)
    return better_answer


# Call the function
final_answer = better_query_response(prompt=prompt)
```

> In the sample code above, I used a simple prompt. However, you can enhance the response quality by adjusting the prompt using a prompt template and system prompt. These provide the LLM with additional context and instructions on how to behave.
> A prompt template structures the prompt, specifying the task, context, and desired response format. A system prompt sets the overall tone, persona, or behaviour the LLM should adopt.
> Combining a well-crafted prompt template and system prompt gives the LLM more context, leading to more coherent and relevant responses aligned with your application’s needs. However, crafting effective prompts requires experimentation and fine-tuning for the specific use case and LLM capabilities.
> Let me just tell you this: what makes your RAG app stand out is the prompting, as it has a 90% chance of determining the quality of responses you get from the LLM, so learn how to prompt properly.

Now that you have tested and validated your RAG app, you can build APIs for it using any framework of your choice. Building APIs will enable seamless integration of your RAG app’s capabilities, such as querying the vector database, retrieving relevant information, and generating responses using the LLM, with other applications or user interfaces.

Popular web frameworks like Flask, Django, FastAPI, or Express.js can be used to develop robust and scalable RESTful or GraphQL APIs. Exposing your RAG app through well-designed APIs will unlock its potential for a wide range of applications.

> Note: While building your RAG app, the potential errors I mentioned earlier were the ones I encountered. However, by following the code samples provided, you should not face any issues, as these were the solutions I implemented to rectify the errors. The only potential error you might encounter is during the PDF reading process, which could be caused by improperly formatted PDF files. Nonetheless, by adhering to the outlined steps, you should be able to resolve such issues effectively.

> HAPPY RAGING🤗🚀

You can always reach me on

[X 3rdSon__](https://x.com/3rdSon__)

[LinkedIn Victory Nnaji](https://www.linkedin.com/in/3rdson/)

[GitHub 3rd-Son](https://www.GitHub.com/3rd-Son)
SBgkOyUsP8qQ
ready-tensor
mit
Engage and Inspire: Best Practices for Publishing on Ready Tensor
![project-presentation-cropped.jpeg](project-presentation-cropped.jpeg) <p align="center">Image Credit: <a href="https://www.freepik.com/">Freepik</a></p>--DIVIDER-- # TL;DR This guide outlines best practices for creating compelling AI and data science publications on Ready Tensor. It covers selecting appropriate publication types, assessing technical content quality, structuring information effectively, and enhancing readability through proper formatting and visuals. By following these guidelines, authors can create publications that effectively showcase their work's value to the AI community. </br> -----DIVIDER--# Quick Guide for Competition Participants If you are participating in a Ready Tensor publication competition, follow these steps to efficiently use this guide: :::info{title="Competition Navigation Path"} **Step 1: Identify Your Project Type** → Go to Section 2.2 - Ready Tensor Project Types - Review the comprehensive table of project types - Select the category that best matches your work **Step 2: Choose Your Presentation Style** → Go to Sections 2.4 and 2.5 - Learn about different presentation styles - Use the project-style matching grid to select the most effective approach **Step 3: Understand Assessment Criteria** → Go to Appendix B - Review the technical assessment criteria for your project type - Check Appendix A for detailed explanations of each criterion - Use this as your checklist - these are the criteria our judges use for reference! **Step 4: Enhance Your Presentation** → Go to Section 5 - Learn best practices for readability and visual appeal - Apply these tips to make your publication stand out ::: _This quick guide helps you focus on the most essential sections for competition preparation. For comprehensive understanding, we recommend reading the entire guide when time permits._--DIVIDER--</br> # 1. Introduction The AI and data science community is expanding rapidly, encompassing students, practitioners, researchers, and businesses. As projects in this field multiply, their success hinges not only on the quality of work but also on effective presentation. This guide aims to help you showcase your work optimally on Ready Tensor. It covers the core tenets of good project presentation, types of publishable projects, selecting appropriate presentation styles, structuring your content, determining information depth, enhancing readability, and ensuring your project stands out. Throughout this guide, you'll learn to present your work in a way that engages and inspires your audience, maximizing its impact in the AI and data science community.--DIVIDER--## 1.1 Guide Purpose and Scope This guide is designed to help AI and data science professionals effectively showcase their projects on the Ready Tensor platform. Whether you're a seasoned researcher, an industry practitioner, or a student entering the field, presenting your work clearly and engagingly is crucial for maximizing its impact and visibility. The purpose of this guide is to: 1. Provide a comprehensive framework for structuring and presenting AI projects. 2. Offer best practices for creating clear, compelling, and informative project documentation. 3. Help users leverage Ready Tensor's features to enhance their project presentations. 
We cover a range of topics, including: - [x] Selecting the appropriate project type and presentation style - [x] Crafting effective metadata to improve discoverability - [x] Structuring your content for optimal readability and engagement - [x] Enhancing your presentation with visuals and multimedia - [x] Ensuring your project is accessible to a wide audience By following the guidelines presented here, you'll be able to create project showcases that not only effectively communicate your work's technical merit but also capture the attention of your target audience, whether they're potential collaborators, employers, or fellow researchers. This guide is not a technical manual for conducting AI research or developing models. Instead, it focuses on the crucial skill of presenting your completed work in the most impactful way possible on the Ready Tensor platform.--DIVIDER--## 1.2 Importance of Effective Presentation An effectively presented project can: - **Attract Attention**: Stand out in a crowded field, capturing interest from peers and stakeholders. - **Facilitate Understanding**: Help your audience quickly grasp complex ideas and methodologies. - **Encourage Engagement**: Foster discussions, collaborations, and feedback from the community. - **Enhance Credibility**: Showcase your professionalism and attention to detail. - **Maximize Impact**: Increase the reach and influence of your work in the AI and data science fields. By investing time in thoughtful presentation, you demonstrate not only technical skills but also effective communication—a critical professional asset. Remember, even groundbreaking ideas can go unnoticed if not presented well.--DIVIDER--# 2. Foundations of Effective Project Presentation This section covers the core tenets of great projects, Ready Tensor project types, and how to select the right presentation approach.--DIVIDER--## 2.1 Core Tenets of Great Projects To create a publication that truly resonates with your audience, focus on these core tenets: --DIVIDER-- ![core-tenets.png](core-tenets.png)--DIVIDER-- Let's expand on each of these tenets: - **Clarity**: Present your ideas in a straightforward, easily understood manner. Use simple language, organize your content logically, and explain complex concepts concisely. Clear communication ensures your audience can follow your work without getting lost in technical jargon. - **Completeness**: Provide comprehensive coverage of your project, including all essential aspects. Offer necessary context and include relevant references. A complete presentation gives your audience a full understanding of your work and its significance. - **Relevance**: Ensure your content is pertinent to your audience and aligns with current industry trends. Target your readers' interests and highlight practical applications of your work. Relevant content keeps your audience engaged and demonstrates the value of your project. - **Engagement**: Make your presentation captivating through varied and visually appealing content. Use visuals to illustrate key points, vary your content format, and tell a compelling story with your data. An engaging presentation holds your audience's attention and makes your work memorable. By adhering to these core tenets, you'll create a project presentation that not only communicates your ideas effectively but also captures and maintains your audience's interest. 
Remember, a well-presented project is more likely to make a lasting impact in the AI and data science community.--DIVIDER--:::tip{title="Tip"} <h2> Addressing Originality and Impact of Your Work </h2> In addition to these four key tenets, consider addressing the originality and impact of your work. While Ready Tensor doesn't strictly require originality like academic journals or conferences, highlighting what sets your project apart can increase its value to readers. Similarly, discussing the potential effects of your work on industry, academia, or society helps readers grasp its significance. These aspects, when combined with the core tenets, create a comprehensive and compelling project presentation. ::: -----DIVIDER--</br> ## 2.2 Project Types on Ready Tensor Ready Tensor supports various project types to accommodate different kinds of AI and data science work. Understanding these types and appropriate presentation styles will help you showcase your work effectively. The following chart lists the common project types:--DIVIDER-- ![project-types4.png](project-types4.png)--DIVIDER-- The following table describes each project type in detail, including the publication category, publication type, and a brief description along with examples: | Publication Category | Publication Type | Description | Examples | | -------------------------------- | ----------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------- | | Research & Academic Publications | Research Paper | Original research contributions presenting novel findings, methodologies, or analyses in AI/ML. Must include comprehensive literature review and clear novel contribution to the field. Demonstrates academic rigor through systematic methodology, experimental validation, and critical analysis of results. | • "Novel Attention Mechanism for Improved Natural Language Processing" <br>• "A New Framework for Robust Deep Learning in Adversarial Environments" | | Research & Academic Publications | Research Summary | Accessible explanations of specific research work(s) that maintain scientific accuracy while making the content more approachable. Focuses on explaining key elements and significance of original research rather than presenting new findings. Includes clear identification of original research and simplified but accurate descriptions of methodology. | • "Understanding GPT-4: A Clear Explanation of its Architecture" <br>• "Breaking Down the DALL-E 3 Paper: Key Innovations and Implications" | | Research & Academic Publications | Benchmark Study | Systematic comparison and evaluation of multiple models, algorithms, or approaches. Focuses on comprehensive evaluation methodology with clear performance metrics and fair comparative analysis. Includes detailed experimental setup and reproducible testing conditions. 
| • "Performance Comparison of Top 5 LLMs on Medical Domain Tasks" <br>• "Resource Utilization Study: PyTorch vs TensorFlow Implementations" | | Educational Content | Academic Solution Showcase | Projects completed as part of coursework, self-learning, or competitions that demonstrate application of AI/ML concepts. Focuses on learning outcomes and skill development using standard datasets or common ML tasks. Documents implementation approach and key learnings. | • "Building a CNN for Plant Disease Detection: A Course Project" <br>• "Implementing BERT for Sentiment Analysis: Kaggle Competition Entry" | | Educational Content | Blog | Experience-based articles sharing insights, tips, best practices, or learnings about AI/ML topics. Emphasizes practical knowledge and real-world perspectives based on personal or team experience. Includes authentic insights not found in formal documentation. | • "Lessons Learned from Deploying ML Models in Production" <br>• "5 Common Pitfalls in Training Large Language Models" | | Educational Content | Technical Deep Dive | In-depth, pedagogical explanations of AI/ML concepts, methodologies, or best practices with theoretical foundations. Focuses on building deep technical understanding through theory rather than implementation. Includes mathematical concepts and practical implications. | • "Understanding Transformer Architecture: From Theory to Practice" <br>• "Deep Dive into Reinforcement Learning: Mathematical Foundations" | | Educational Content | Technical Guide | Comprehensive, practical explanations of technical topics, tools, processes, or practices in AI/ML. Focuses on practical understanding and application without deep theoretical foundations. Includes best practices, common pitfalls, and decision-making frameworks. | • "ML Model Version Control Best Practices" <br>• "A Complete Guide to ML Project Documentation Standards" | | Educational Content | Tutorial | Step-by-step instructional content teaching specific AI/ML concepts, techniques, or tools. Emphasizes hands-on learning with clear examples and code snippets. Includes working examples and troubleshooting tips. | • "Building a RAG System with LangChain: Step-by-Step Guide" <br>• "Implementing YOLO Object Detection from Scratch" | | Real-World Applications | Applied Solution Showcase | Technical implementations of AI/ML solutions solving specific real-world problems in industry contexts. Focuses on technical architecture, implementation methodology, and engineering decisions. Documents specific problem context and technical evaluations. | • "Custom RAG Implementation for Legal Document Processing" <br>• "Building a Real-time ML Pipeline for Manufacturing QC" | | Real-World Applications | Case Study | Analysis of AI/ML implementations in specific organizational contexts, focusing on business problem, solution approach, and impact. Documents complete journey from problem identification to solution impact. Emphasizes business context over technical details. | • "AI Transformation at XYZ Bank: From Legacy to Innovation" <br>• "Implementing Predictive Maintenance in Aircraft Manufacturing" | | Real-World Applications | Technical Product Showcase | Presents specific AI/ML products, platforms, or services developed for user adoption. Focuses on features, capabilities, and practical benefits rather than implementation details. Includes use cases and integration scenarios. 
| • "IntellAI Platform: Enterprise-grade ML Operations Suite" <br>• "AutoML Pro: Automated Model Training and Deployment Platform" | | Real-World Applications | Solution Implementation Guide | Step-by-step guides for implementing specific AI/ML solutions in production environments. Focuses on practical deployment steps and operational requirements. Includes infrastructure setup, security considerations, and maintenance guidance. | • "Production Deployment Guide for Enterprise RAG Systems" <br>• "Setting Up MLOps Pipeline with Azure and GitHub Actions" | | Real-World Applications | Industry Report | Analytical reports examining current state, trends, and impact of AI/ML adoption in specific industries. Provides data-driven insights about adoption patterns, challenges, and success factors. Includes market analysis and future outlook. | • "State of AI in Financial Services 2024" <br>• "ML Adoption Trends in Healthcare: A Comprehensive Analysis" | | Real-World Applications | White Paper | Strategic documents proposing approaches to industry challenges using AI/ML solutions. Focuses on problem analysis, solution possibilities, and strategic recommendations. Provides thought leadership and actionable recommendations. | • "AI-Driven Digital Transformation in Banking" <br>• "Future of Healthcare: AI Integration Framework" | | Technical Assets | Dataset Contribution | Creation and publication of datasets for AI/ML applications. Focuses on data quality, comprehensive documentation, and usefulness for specific ML tasks. Includes collection methodology, preprocessing steps, and usage guidelines. | • "MultiLingual Customer Service Dataset: 1M Labeled Conversations" <br>• "Medical Image Dataset for Anomaly Detection" | | Technical Assets | Open Source Contribution | Contributions to existing open-source AI/ML projects. Focuses on collaborative development and community value. Includes clear description of changes, motivation, and impact on the main project. | • "Optimizing Inference Speed in Hugging Face Transformers" <br>• "Adding TPU Support to Popular Deep Learning Framework" | | Technical Assets | Tool/App/Software | Introduction and documentation of specific software implementations utilizing AI/ML. Focuses on tool's utility, functionality, and practical usage rather than theoretical foundations. Includes comprehensive usage information and technical specifications. | • "FastEmbed: Efficient Text Embedding Library" <br>• "MLMonitor: Real-time Model Performance Tracking Tool" | --DIVIDER--## 2.3 Selecting Type for Your Project You can choose the most suitable project type by considering these key factors: **1. Primary Focus of Your Project** Identify the main contribution or core content of your work. Examples include: - **Original Research**: Presenting new findings or theories. - **Real-World Application**: Describing a practical solution for a real-world problem. - **Data Analysis**: Extracting insights from datasets. - **Software Tool**: Developing applications or utilities. - **Educational Content**: Providing tutorials or instructional guides. **2. Objective for Publishing** Clarify what you aim to achieve by sharing your project. Common objectives include: - **Advance Knowledge**: Contributing to academic discourse. - **Share Practical Solutions**: Demonstrating applications of methods. - **Educate Others**: Teaching specific skills or concepts. - **Showcase Skills**: Highlighting expertise for professional opportunities. **3. 
Target Audience** Determine who will benefit most from your project. Potential audiences include: - **Researchers and Academics** - **Students and Educators** - **Industry Practitioners** - **Potential Employers** - **AI/ML Enthusiasts** Based on these considerations, select the project type that best aligns with your work. Remember, the project type serves as a primary guide but doesn't limit the scope of your content. Use tags to highlight additional aspects of your project that may not be captured by the primary project type.--DIVIDER--## 2.4 Presentation Styles Choosing the right presentation style is crucial for effectively communicating your project's content and engaging your target audience. See the following chart for various styles for presenting your project work.--DIVIDER-- ![presentation-styles.png](presentation-styles.png)--DIVIDER--Let's review the styles in more detail: • **Narrative**: This style weaves your project into a compelling story, making it accessible and engaging. It's particularly effective for showcasing the evolution of your work, from initial challenges to final outcomes. • **Technical**: Focused on precision and detail, the technical style is ideal for projects that require in-depth explanations of methodologies, algorithms, or complex concepts. It caters to audiences seeking thorough understanding. • **Visual**: By prioritizing graphical representations, the visual style makes complex data and ideas more digestible. It's particularly powerful for illustrating trends, comparisons, and relationships within your project. • **Instructional**: This style guides the audience through your project step-by-step. It's designed to facilitate learning and replication, making it ideal for educational content or showcasing reproducible methods. • **Mixed**: Combining elements from other styles, the mixed approach offers versatility. It allows you to tailor your presentation to diverse aspects of your project and cater to varied audience preferences. We will now explore how to match the project type and presentation style to your project effectively.--DIVIDER--## 2.5 Matching Presentation Styles to Project Types Different project types often lend themselves to certain presentation styles. While there's no one-size-fits-all approach, the following grid can guide you in selecting the most appropriate style(s) for your project:--DIVIDER-- ![project_presentation_grid-v2.svg](project_presentation_grid-v2.svg)--DIVIDER--Remember, this grid is a guide, not a strict rule. Your unique project may benefit from a creative combination of styles. --DIVIDER--:::info{title="Info"} <h2> Note on Presentation Styles: </h2> While research papers, benchmark studies, and technical deep dives are primarily technical in nature, Ready Tensor encourages incorporating visual elements to enhance understanding and reach a broader audience. A Visual style can be effectively used in these publication types through: - Infographics summarizing complex methodologies - Data visualizations illustrating results - Graphical abstracts highlighting key findings - Architecture diagrams explaining system design - Flow charts depicting processes - Comparative visualizations for benchmark results The goal is to make technical content more accessible without compromising scientific rigor. This approach helps bridge the gap between technical depth and public engagement, allowing publications to serve both expert and general audiences effectively. 
The platform supports both traditional technical presentations and visually enhanced versions to accommodate different learning styles and improve content accessibility. For research summaries in particular, visual elements are highly encouraged as they help communicate complex research findings to a broader audience. :::--DIVIDER-- -----DIVIDER--</br> # 3. Creating Your Publication Now that you understand the foundational principles of effective project presentation, it’s time to bring your work to life. This section will guide you through crafting a well-structured, visually appealing, and engaging publication that maximizes the impact of your AI/ML project on Ready Tensor.--DIVIDER-- ## 3.1 Essential Project Metadata Metadata plays a critical role in making your project discoverable and understandable. Here’s how to ensure your project’s metadata is clear and compelling: **Choosing a Compelling Title**: Your title should be concise yet descriptive, capturing the core contribution of your work. Aim for a title that sparks curiosity while clearly reflecting the project’s focus. **Selecting Appropriate Tags**: Tags help users find your project. Choose tags that accurately represent the project’s content, methods, and application areas. Prioritize terms that are both relevant and commonly searched within your domain. **Picking the Right License**: Select an appropriate license from the dropdown to specify how others can use your work. Consider licenses like MIT or GPL based on your goals, ensuring it aligns with your project’s intended use. **Authorship**: Clearly list all contributors, recognizing those who played significant roles in the project. Include affiliations where relevant to establish credibility and traceability of contributions. **Abstract or TL;DR**: Provide a concise summary of your project, focusing on its key contributions, methodology, and impact. Keep it brief but informative, as this is often the first thing readers will see to gauge the relevance of your work. Place this at the beginning of your publication to provide a quick overview. This section is crucial in setting the stage for how your project will be perceived, so invest time to make it both informative and engaging.--DIVIDER--## 3.2 Structuring Your Publication Each project type has a standard structure that helps readers navigate your content. Below are typical sections to include based on the type of project you are publishing. Note that the abstract or tl;dr is mandatory and is part of the project metadata. 
--DIVIDER--<h3>Research Paper</h3> - Introduction ➜ Literature Review ➜ Methodology ➜ Results ➜ Discussion ➜ Conclusion ➜ Future Work ➜ References <h3>Research Summary</h3> - Original Research Context ➜ Key Concepts ➜ Methodology Summary ➜ Main Findings ➜ Implications ➜ References <h3>Benchmark Study</h3> - Introduction ➜ Literature Review ➜ Datasets ➜ Models/Algorithms ➜ Experiment Design ➜ Results ➜ Discussion ➜ Conclusion ➜ References <h3>Academic Solution Showcase</h3> - Introduction ➜ Problem Statement ➜ Data Collection ➜ Methodology ➜ Results ➜ Discussion ➜ Conclusion ➜ References ➜ Acknowledgments <h3>Blog</h3> - Flexible structure due to narrative style <h3>Technical Deep Dive</h3> - Introduction ➜ Theoretical Foundation ➜ Technical Analysis ➜ Practical Implications ➜ Discussion ➜ References <h3>Technical Guide</h3> - Overview ➜ Core Concepts ➜ Technical Explanations ➜ Key Insights ➜ References <h3>Tutorial</h3> - Introduction ➜ Prerequisites ➜ Step-by-Step Instructions (with code snippets) ➜ Explanations ➜ Conclusion ➜ Additional Resources/References <h3>Applied Solution Showcase</h3> - Problem Context ➜ Technical Requirements ➜ Architecture ➜ Implementation ➜ Results ➜ Impact ➜ References <h3>Case Study</h3> - Executive Summary ➜ Problem Statement ➜ Methodology ➜ Findings ➜ Impact ➜ References <h3>Technical Product Showcase</h3> - Product Overview ➜ Features ➜ Use Cases ➜ Technical Specs ➜ Usage / Integration Guidelines ➜ References <h3>Solution Implementation Guide</h3> - Overview ➜ Prerequisites ➜ Architecture ➜ Implementation Steps ➜ Security & Monitoring ➜ Troubleshooting ➜ References <h3>Industry Report</h3> - Executive Summary ➜ Industry Analysis ➜ Current State ➜ Trends ➜ Challenges ➜ Recommendations ➜ References <h3>White Paper</h3> - Executive Summary ➜ Problem Analysis ➜ Solution Framework ➜ Implementation Strategy ➜ Recommendations ➜ References <h3>Dataset Contribution</h3> - Overview ➜ Dataset Purpose ➜ Sourcing and Processing ➜ Dataset Stats and Metrics ➜ Usage Instructions ➜ Contact Info ➜ References <h3>Open Source Contribution</h3> - Overview ➜ Purpose ➜ Contribution ➜ Usage ➜ Contact Info ➜ References <h3>Tool/App/Software</h3> - Tool Overview ➜ Features ➜ Installation Instructions ➜ Usage Examples ➜ API Documentation ➜ References--DIVIDER--By following these recommended sections based on your project type, you ensure your content is well-organized and easy to navigate, helping readers quickly find the information most relevant to them. Now, let’s explore ways to further enhance the readability and appeal of your publication.--DIVIDER-- # 4. Assessing Technical Content The technical quality of an AI/ML publication depends heavily on its type. A research paper requires comprehensive methodology and experimental validation, while a tutorial focuses on clear step-by-step instructions and practical implementation. Understanding these differences is crucial for creating high-quality content that meets readers' expectations. **Understanding Assessment Criteria** Refer to the comprehensive bank of assessment criteria specifically for AI/ML publications (detailed in **Appendix A**). 
These criteria cover various aspects including: - Purpose and objectives definition - Technical depth and methodology - Data handling and documentation - Implementation details - Results and validation - Practical considerations - Educational effectiveness - Industry relevance - Technical asset documentation **Matching Criteria to Publication Types** Different publication types require different combinations of these criteria. For example: - **Research Papers** emphasize originality, methodology, and experimental validation - **Tutorials** focus on prerequisites, step-by-step guidance, and code explanations - **Case Studies** prioritize problem definition, solution impact, and business outcomes - **Technical Deep Dives** concentrate on theoretical foundations and technical accuracy A complete mapping of criteria to publication types is provided in **Appendix B**, serving as a checklist for authors. When writing your publication, refer to the criteria specific to your chosen type to ensure you're meeting all necessary requirements. **Using the Assessment Framework** To create high-quality technical content: 1. **Identify Your Publication Type** - Review the publication types described earlier - Select the type that best matches your content's purpose 2. **Review Relevant Criteria** - Consult Appendix B for criteria specific to your publication type - Use these criteria as a planning checklist before writing 3. **Assess Your Content** - Regularly check your work against the relevant criteria - Ensure you're meeting the requirements, especially those that would be considered essential to the publication type 4. **Iterate and Improve** - Review areas where criteria aren't fully met - Strengthen sections that need more depth or clarity - Refine content until all relevant criteria are satisfied - Polish your work through multiple revisions Remember, these criteria serve as guidelines rather than rigid rules. The goal is to ensure your publication effectively serves its intended purpose and audience. For detailed criteria descriptions and publication-specific requirements, refer to Appendices A and B. **Quality vs. Quantity** Meeting the assessment criteria isn't about increasing length or adding unnecessary complexity. Instead, focus on: - Addressing each relevant criterion thoroughly but concisely - Including only content that serves your publication's purpose - Maintaining appropriate technical depth for your audience - Providing clear value to readers With these technical content fundamentals in place, we can move on to enhancing readability and appeal, which we'll cover in the next section. --DIVIDER-- # 5. Enhancing readability and appeal Creating an engaging publication requires more than just presenting your findings. To capture and maintain your audience's attention, it's essential to structure your content in a visually appealing and easy-to-read format. The following guidelines will help you enhance the readability and overall impact of your publication, making it accessible and compelling to a wide audience. <h2>Attention-Grabbing Title</h2> The title is the first element readers see, so it should be concise and compelling. Aim to communicate the essence of your project in a way that piques curiosity and invites further exploration. Avoid overly technical jargon in the title, but ensure it's descriptive enough to reflect the project's main focus. <h2>Selecting a Hero/Banner Image</h2> A well-chosen banner or hero image helps set the tone for your publication. 
It should be relevant to your project and visually engaging, drawing attention while providing context. Use high-quality images that align with your content’s theme—whether it's a dataset visualization, a model architecture diagram, or an industry-related image. <h2>Use Headers and Subheaders</h2> Headers and subheaders break up your content into digestible sections, improving readability and making it easier for readers to navigate your publication. Use a consistent hierarchy (e.g., h2 for primary sections, h3 for subsections) to create a clear structure. This also helps readers scan for specific information quickly. <h2>Visuals and Multimedia</h2> Incorporate visuals such as images, diagrams, and videos to complement your text. Multimedia elements can illustrate complex concepts, making your publication more engaging and accessible. Use visuals to break up long sections of text and help readers retain information. <h2>Breaking Text Monotony</h2> Large blocks of text can overwhelm readers. Break up paragraphs with images, bullet points, or callouts. Vary sentence length to keep your content dynamic and engaging. Consider adding whitespace between sections to create breathing room and guide the reader’s eye. <h2>Using Callouts and Info Boxes</h2> Callouts and info boxes help emphasize important points or provide additional context. Use these selectively to highlight key insights or offer helpful tips: :::tip{title="Tip"} - **Tip**: Share helpful advice or shortcuts. ::: :::info{title="Info"} - **Note**: Provide additional information that complements the main text. ::: :::caution{title="Caution"} - **Caution**: Warn readers about potential pitfalls. ::: :::warning{title="Warning"} - **Warning**: Flag critical information or risks. ::: <h2>Use Bullet Points and Numbered Lists (But Don't Overuse Them)</h2> Bullet points and numbered lists are useful for organizing key ideas and steps. However, overusing them can make your publication feel fragmented. Use lists strategically to break down processes or summarize important points, but balance them with regular paragraphs to maintain flow. <h2>Incorporating Charts, Graphs, and Tables</h2> Charts, graphs, and tables are essential for presenting data and results clearly. Ensure they are labeled appropriately, with clear legends and titles. Use them to complement your text, not replace it. Highlight important trends or insights within the accompanying text to help readers understand their significance. <h2>Show Code Snippets, but Avoid Code Dumps</h2> While it’s important to share your methodology, avoid overwhelming readers with large blocks of code. Instead, include code snippets that demonstrate key processes or algorithms, and link to your full codebase via a repository. Below is an example of a useful code snippet to include. It demonstrates a custom loss function that was used in a project: ```python def loss_function(recon_x, x, mu, logvar): BCE = F.binary_cross_entropy(recon_x, x, reduction='sum') KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) return BCE + KLD ``` <h2>Highlight Key Findings</h2> Don’t bury your most important insights in lengthy sections. Use bold text, bullet points, or callouts to highlight key findings. Ensure that readers can quickly identify the main contributions or conclusions of your work. <h2>Use a Color Scheme for Charts</h2> Consistent use of colors in charts and graphs helps readers follow trends and comparisons. 
Pick a color scheme that is visually appealing, easy to read, and, if possible, consistent with your publication’s theme. Avoid overly bright or clashing colors. <h2>Accessibility Considerations</h2> Make your publication accessible to all readers by adopting basic accessibility principles. Use alt text for images, choose legible fonts, and ensure there is sufficient color contrast in your charts. Accessibility improves inclusivity and helps reach a broader audience. <h2> Image Aspect Ratio and Sizes</h2> When including images in your Ready Tensor publication, it’s essential to maintain proper aspect ratios and image sizes to ensure your visuals are clear, engaging, and enhance the overall readability of your project. Here are some best practices for handling image dimensions: 1. **Aspect Ratio** The **aspect ratio** of an image is the proportional relationship between its width and height. Common aspect ratios include: - **4:3**: Suitable for most charts, graphs, and screenshots. - **4:1**: Ideal for hero images at the top of the publication. - **16:9**: Commonly used for wider images, such as landscape photos or infographics. - **1:1**: Ideal for icons, logos, or small visuals that need to appear square. Maintaining a consistent aspect ratio across images in your publication can create a professional and uniform look. Distorted images (those stretched or compressed) can detract from the quality of your presentation, so it’s important to ensure that any resizing preserves the original aspect ratio. 2. **Image Sizes** The size of your images should balance clarity and file size. High-resolution images are critical for presenting details in charts, diagrams, and other visuals, but excessively large files can slow down loading times. Here are some recommendations: - **Resolution**: Use images with at least **72 DPI (dots per inch)** for web display. For high-quality visuals, especially for detailed diagrams or charts, consider using images with **150 DPI or higher**. - **File Size**: To optimize performance, aim for image sizes between **50KB to 200KB** where possible. Compress images without sacrificing quality to reduce file size, using formats like **JPEG** for photos or **PNG** for charts 3. **Maintaining Clarity** - **Avoid pixelation**: If you need to resize an image, make sure it doesn’t become pixelated. Always scale down rather than up to maintain image sharpness. - **Use vector graphics**: For diagrams or illustrations, consider using **SVG** (Scalable Vector Graphics) format. SVG images maintain clarity at any size and are ideal for logos, icons, and simple diagrams. By following these guidelines, you ensure that your images not only look good but also contribute effectively to the storytelling in your project, making it both visually appealing and easy to comprehend for your audience. --DIVIDER--# 6. Summary In this article, we explored the key practices for making your AI and data science projects stand out on Ready Tensor. From structuring your project with clarity to focusing on concepts and results over code, the way you present your work is as important as the technical accomplishments themselves. By utilizing headers, bullet points, and visual elements like graphs and tables, you ensure that your audience can easily follow along, understand your approach, and appreciate your outcomes. Your ability to clearly communicate your project's purpose, methodology, and findings not only enhances its value but also sets you apart in a crowded space. 
The goal is not just to showcase your skills but to engage your readers, foster collaboration, and open doors to future opportunities. As you wrap up each project, take a moment to reflect on its impact and consider any potential improvements or next steps. With these best practices in mind, your work will not only be technically sound but also compelling and impactful to a wider audience.--DIVIDER--# References - [ReadyTensor's Markdown formatting guide](https://app.readytensor.ai/publications/markdown_for_machine_learning_projects_a_comprehensive_guide_LX9cbIx7mQs9) - [Choose a License](https://choosealicense.com/): A website that explains different open-source licenses and helps users decide which one to pick. - [Unsplash](https://unsplash.com/): A site for royalty-free images. - [Freepik](https://www.freepik.com/): A site for royalty-free images. - [Web Content Accessibility Guidelines (WCAG) Overview](https://www.w3.org/WAI/standards-guidelines/wcag/): Guidelines for making your content accessible on the web.--DIVIDER--# Appendices--DIVIDER--## A. Technical Content Assessment Criteria The following is the comprehensive list of criteria to assess the quality of technical content for AI/ML publications of different types.--DIVIDER--| Criterion Name | Description | | ----------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | Clear Purpose and Objectives | Evaluates whether the publication explicitly states its core purpose within the first paragraph or two. | | Specific Objectives | Assesses whether the publication lists specific and concrete objectives that will be addressed. | | Intended Audience/Use Case | Evaluates whether the publication clearly identifies who it's for and how it benefits them. | | Target Audience Definition | Evaluates how well the publication identifies and describes the target audience for the tool, software package, dataset, or product, including user profiles, domains, and use cases. | | Specific Research Questions/Objectives | Assesses whether the publication breaks down its purpose into specific, measurable research questions or objectives that guide the investigation. | | Testability/Verifiability | Assesses whether the research questions and hypotheses can be tested or verified using the proposed approach. Research hypothesis must be falsifiable. | | Problem Definition | Evaluates how well the publication defines and articulates the real-world problem that motivated the AI/ML solution. This includes the problem's scope, impact, and relevance to stakeholders. | | Literature Review Coverage & Currency | Assesses the comprehensiveness and timeliness of literature review of similar works. | | Literature Review Critical Analysis | Evaluates how well the publication analyzes and synthesizes existing work in literature. | | Citation Relevance | Evaluates whether the cited works are relevant and appropriately support the research context. | | Current State Gap Identification | Assesses whether the publication clearly identifies gaps in existing work. 
| | Context Establishment | Evaluates how well the publication establishes context for the topic covered. | | Methodology Explanation | Evaluates whether the technical methodology is explained clearly and comprehensively, allowing readers to understand the technical approach. | | Step-by-Step Guidance Quality | Evaluates how effectively the publication breaks down complex procedures into clear, logical, and sequential steps that guide readers through the process. The steps should build upon each other in a coherent progression, with each step providing sufficient detail for completion before moving to the next. | | Assumptions Stated | Evaluates whether technical assumptions are clearly stated and explained. | | Solution Approach and Design Decisions | Evaluates whether the overall solution approach and specific design decisions are appropriate and well-justified. This includes explanation of methodology choice, architectural decisions, and implementation choices. Common/standard approaches may need less justification than novel or unconventional choices. | | Experimental Protocol | Assesses whether the publication outlines a clear, high-level approach for conducting the study. | | Study Scope & Boundaries | Evaluates whether the publication clearly defines the boundaries, assumptions, and limitations of the study. | | Evaluation Framework | Assesses whether the publication defines a clear framework for evaluating results. | | Validation Strategy | Evaluates whether the publication outlines a clear approach to validating results. | | Dataset Sources & Collection | Evaluates whether dataset(s) used in the study are properly documented. For existing datasets, proper citation and sourcing is required for each. For new datasets, the collection methodology must be described. For benchmark studies or comparative analyses, all datasets must be properly documented. | | Dataset Description | Assesses whether dataset(s) are comprehensively described, including their characteristics, structure, content, and rationale for selection. For multiple datasets, comparability and relationships should be clear. | | Data Requirements Specification | For implementations requiring data: evaluates whether the publication clearly specifies the data requirements needed. | | Dataset Selection or Creation | Evaluates whether the rationale for dataset selection is explained, or for new datasets, whether the creation methodology is properly documented. | | Datset procesing Methodology | Evaluates whether data processing steps are clearly documented and justified. This includes any preprocessing, missing data handling, anomalies handling, and other data clean-up processing steps. | | Basic Dataset Stats | Evaluates whether the publication provides clear documentation of fundamental dataset properties | | Implementation Details | Assesses whether sufficient implementation details are provided with enough clarity. Focuses on HOW the methodology was implemented. | | Parameters & Configuration | Evaluates whether parameter choices and configuration settings are clearly specified and justified where non-standard. Includes model hyperparameters, system configurations, and any tuning methodology used. | | Experimental Environment | Evaluates whether the computational environment and resources used for the work are clearly specified when relevant. | | Tools, Frameworks, & Services | Documents the key tools, frameworks, 3rd party services used in the implementation when relevant. 
| | Implementation Considerations | Evaluates coverage of practical aspects of implementing or applying the model, concept, app, or tool described in the publication. | | Deployment Considerations | Evaluates whether the publication adequately discusses deployment requirements, considerations, and challenges for implementing the solution in a production environment. This includes either actual deployment details if deployed, or thorough analysis of deployment requirements if proposed. | | Monitoring and Maintenance Considerations | Evaluates whether the publication discusses how to monitor the solution's performance and maintain its effectiveness over time. This includes monitoring strategies, maintenance requirements, and operational considerations for keeping the solution running optimally. | | Performance Metrics Analysis | Evaluates whether appropriate performance metrics are used and properly analyzed to demonstrate the success or effectiveness of the work. | | Comparative Analysis | Assesses whether results are properly compared against relevant baselines or state-of-the-art alternatives. At least 4 or 5 alternatives are compared with. | | Statistical Analysis | Evaluates whether appropriate statistical methods are used to validate results. | | Key Results | Evaluates whether the main results and outcomes of the research are clearly presented in an understandable way. | | Results Interpretation | Assesses whether results are properly interpreted and their implications explained. | | Solution Impact Assessment | Evaluates how well the publication quantifies and demonstrates the real-world impact and value created by implementing the AI/ML solution. This includes measuring improvements in organizational metrics (cost savings, efficiency gains, productivity), user-centered metrics (satisfaction, adoption, time saved), and where applicable, broader impacts (environmental, societal benefits). The focus is on concrete outcomes and value creation, not technical performance measures. | | Constraints, Boundaries, and Limitations | Evaluates whether the publication clearly defines when and where the work is applicable (boundaries), what constrains its effectiveness (constraints), and what its shortcomings are (limitations). | | Summary of Key Findings | Evaluates whether the main findings and contributions of the work are clearly summarized and their significance explained. | | Significance and Implications of Work | Assesses whether the broader significance and implications of the work are properly discussed. | | Features and Benefits Analysis | Evaluates the clarity and completeness of feature descriptions and their corresponding benefits to users. | | Competitive Differentiation | Evaluates how effectively the publication demonstrates the solution's unique value proposition and advantages compared to alternatives. | | Future Directions | Evaluates whether meaningful future work and research directions are identified. | | Originality of Work | Evaluates whether the work presents an original contribution, meaning work that hasn't been done before. This includes novel analyses, comprehensive comparisons, new methodologies, or new implementations. | | Innovation in Methods/Approaches | Evaluates whether the authors created new methods, algorithms, or applications. This specifically looks for technical innovation, not just original analysis. 
| | Advancement of Knowledge or Practice | Evaluates how the work advances knowledge or practice, whether through original analysis or innovative methods or implementation. | | Code & Dependencies | Evaluates whether code is available and dependencies are properly documented for reproduction. | | Data Source and Collection | Evaluates whether the publication clearly describes where the data comes from and the strategy for data collection or generation. This criterion only applies if the publication involved sourcing and creation of the data by authors. | | Data Inclusion and Filtering Criteria | Assesses whether the publication defines clear criteria for what data is included or excluded from the dataset | | Dataset Creation Quality Control Methodology | Evaluates the systematic approach to ensuring data quality during collection, generation, and processing | | Dataset Bias and Representation Consideration | Assesses whether potential biases in data collection/generation are identified and addressed. For synthetic or naturally bias-free datasets, clear documentation of why bias is not a concern is sufficient. | | Statistical Characteristics | Assesses whether the publication provides comprehensive statistical information about the dataset | | Dataset Quality Metrics and Indicators | Evaluates whether the publication provides clear metrics and indicators of data quality | | State-of-the-Art Comparisons | Evaluates whether the study includes relevant state-of-the-art methods from recent literature for comparison. Must contain at least 4 or 5 other top methods for comparison | | Benchmarking Method Selection Justification | Evaluates whether the choice of methods, models, or tools for comparison is well-justified and reasonable for the study's objectives. | | Fair Comparison Setup | Assesses whether all methods are compared under fair and consistent conditions. | | Benchmarking Evaluation Rigor | Evaluates whether the comparison uses appropriate metrics and statistical analysis. | | Purpose-Aligned Topic Coverage | Evaluates whether the publication covers all topics and concepts necessary to fulfill its stated purpose, goals, or learning objectives. Coverage should be complete relative to what was promised, rather than exhaustive of the general topic area. | | Clear Prerequisites and Requirements | Evaluates whether the publication clearly states what readers need to have (tools, environment, software) or need to know (technical knowledge, concepts) before they can effectively use or understand the content. Most relevant for educational content like tutorials, guides, and technical implementations, but can also apply to technical deep dives and implementation reports. | | Appropriate Technical Depth | Assesses whether the technical content matches the expected depth for the intended audience and publication type. For technical audiences, evaluates if it provides sufficient depth. For general audiences, evaluates if it maintains accessibility while being technically sound. | | Code Usage Appropriateness | Assesses whether code examples, when present, are used judiciously and add value to the explanation. If the publication type or topic doesn't require code examples, then absence of code is appropriate and should score positively. | | Code Clarity and Presentation | When code examples are present, evaluates whether they are well-written, properly formatted and integrated with the surrounding content. 
If the publication contains no code examples, this criterion is considered satisfied by default. | | Code Explanation Quality | When code snippets are present, evaluates how well they are explained and contextualized within the content. If the publication contains no code snippets, this criterion is considered satisfied by default. | | Real-World Applications | Assesses whether the publication clearly explains the practical significance, real-world relevance, and potential applications of the topic. This shows readers why the content matters and how it can be applied in practice. | | Limitations and Trade-offs | Assesses whether the content discusses practical limitations, trade-offs, and potential pitfalls in real-world applications. | | Supporting Examples | Evaluates whether educational content (tutorials, guides, blogs, technical deep dives) includes concrete and contemporary examples to illustrate concepts and enhance understanding. Examples should help readers better grasp the material through practical demonstration. | | Industry Insights | Evaluates inclusion of industry trends, statistics, or patterns observed in practice. | | Success/Failure Stories | Assesses whether specific success or failure stories are shared to illustrate outcomes and lessons learned. | | Content Accessibility | Evaluates how well technical concepts are explained for a broader audience while maintaining scientific accuracy. | | Technical Progression | Assesses how well the content builds technical understanding progressively, introducing concepts in a logical sequence that supports comprehension. | | Scientific Clarity | Evaluates whether scientific accuracy is maintained while presenting content in an accessible way. | | Source Credibility | Evaluates whether the publication properly references and cites its sources, clearly identifies the origin of data/code/tools used, and provides sufficient version/environment information for reproducibility. This helps readers validate claims, trace information to original sources, and implement solutions reliably. | | Reader Next Steps | Evaluates whether the publication provides clear guidance on what readers can do after consuming the content. This includes suggested learning paths, topics to explore, further reading materials, skills to practice, or actions to take. The focus is on helping readers understand their potential next steps. | | Uncommon Insights | Evaluates whether the publication provides valuable insights that are either unique (from personal experience/expertise) or uncommon (not easily found in standard sources). Looks for expert analysis, real implementation experiences, or carefully curated information that is valuable but not widely available. | | Technical Asset Access Links | Evaluates whether the publication provides links to access the technical asset (tool, dataset, model, etc.), such as repositories, registries, or download locations | | Installation and Usage Instructions | Evaluates whether the publication provides clear instructions for installing and using the tool, either directly in the publication or through explicit references to external documentation. The key is that a reader should be able to quickly understand how to get started with the tool. 
| | Performance Characteristics and Requirements | Evaluates documentation of tool's performance characteristics | | Maintenance and Support Status | Evaluates whether the publication clearly communicates the maintenance and support status of the technical asset (tool, dataset, model, etc.) | | Access and Availability Status | Evaluates whether the publication clearly states how the technical asset can be accessed and used by others | | License and Usage Rights of the Technical Asset | Evaluates whether the publication clearly communicates the licensing terms and usage rights of the technical asset itself (not the publication). This includes software licenses for tools, data licenses for datasets, model licenses for AI models, etc. | | Contact Information of Asset Creators | Evaluates whether the publication provides information about how to contact the creators/maintainers or the technical asset or get support, either directly or through clear references to external channels | --DIVIDER--## B. Assessment Criteria Per Project Type --DIVIDER--### B.1 Research Paper | Publication Type | Criterion Name | | ---------------- | ---------------------------------------- | | Research Paper | Clear Purpose and Objectives | | Research Paper | Intended Audience/Use Case | | Research Paper | Specific Research Questions/Objectives | | Research Paper | Testability/Verifiability | | Research Paper | Literature Review Coverage & Currency | | Research Paper | Literature Review Critical Analysis | | Research Paper | Citation Relevance | | Research Paper | Current State Gap Identification | | Research Paper | Context Establishment | | Research Paper | Methodology Explanation | | Research Paper | Assumptions Stated | | Research Paper | Solution Approach and Design Decisions | | Research Paper | Experimental Protocol | | Research Paper | Study Scope & Boundaries | | Research Paper | Evaluation Framework | | Research Paper | Validation Strategy | | Research Paper | Dataset Sources & Collection | | Research Paper | Dataset Description | | Research Paper | Dataset Selection or Creation | | Research Paper | Datset procesing Methodology | | Research Paper | Basic Dataset Stats | | Research Paper | Implementation Details | | Research Paper | Parameters & Configuration | | Research Paper | Experimental Environment | | Research Paper | Tools, Frameworks, & Services | | Research Paper | Implementation Considerations | | Research Paper | Performance Metrics Analysis | | Research Paper | Comparative Analysis | | Research Paper | Statistical Analysis | | Research Paper | Key Results | | Research Paper | Results Interpretation | | Research Paper | Constraints, Boundaries, and Limitations | | Research Paper | Key Findings | | Research Paper | Significance and Implications of Work | | Research Paper | Future Directions | | Research Paper | Originality of Work | | Research Paper | Innovation in Methods/Approaches | | Research Paper | Advancement of Knowledge or Practice | | Research Paper | Code & Dependencies | | Research Paper | Code Usage Appropriateness | | Research Paper | Code Clarity and Presentation |--DIVIDER--### B.2 Benchmark Study | Publication Type | Criterion Name | | ---------------- | ------------------------------------------- | | Benchmark Study | Clear Purpose and Objectives | | Benchmark Study | Intended Audience/Use Case | | Benchmark Study | Specific Research Questions/Objectives | | Benchmark Study | Testability/Verifiability | | Benchmark Study | Literature Review Coverage & Currency | | Benchmark Study 
| Literature Review Critical Analysis | | Benchmark Study | Citation Relevance | | Benchmark Study | Current State Gap Identification | | Benchmark Study | Context Establishment | | Benchmark Study | Methodology Explanation | | Benchmark Study | Assumptions Stated | | Benchmark Study | Solution Approach and Design Decisions | | Benchmark Study | Experimental Protocol | | Benchmark Study | Study Scope & Boundaries | | Benchmark Study | Evaluation Framework | | Benchmark Study | Validation Strategy | | Benchmark Study | Dataset Sources & Collection | | Benchmark Study | Dataset Description | | Benchmark Study | Dataset Selection or Creation | | Benchmark Study | Datset procesing Methodology | | Benchmark Study | Basic Dataset Stats | | Benchmark Study | Implementation Details | | Benchmark Study | Parameters & Configuration | | Benchmark Study | Experimental Environment | | Benchmark Study | Tools, Frameworks, & Services | | Benchmark Study | Implementation Considerations | | Benchmark Study | Performance Metrics Analysis | | Benchmark Study | Comparative Analysis | | Benchmark Study | Statistical Analysis | | Benchmark Study | Key Results | | Benchmark Study | Results Interpretation | | Benchmark Study | Constraints, Boundaries, and Limitations | | Benchmark Study | Key Findings | | Benchmark Study | Significance and Implications of Work | | Benchmark Study | Future Directions | | Benchmark Study | Originality of Work | | Benchmark Study | Innovation in Methods/Approaches | | Benchmark Study | Advancement of Knowledge or Practice | | Benchmark Study | Code & Dependencies | | Benchmark Study | Benchmarking Method Selection Justification | | Benchmark Study | Fair Comparison Setup | | Benchmark Study | Benchmarking Evaluation Rigor |--DIVIDER--### B.3 Research Summary | Publication Type | Criterion Name | |------------------|----------------------| | Research Summary | Clear Purpose and Objectives | | Research Summary | Specific Objectives | | Research Summary | Intended Audience/Use Case | | Research Summary | Specific Research Questions/Objectives | | Research Summary | Current State Gap Identification | | Research Summary | Context Establishment | | Research Summary | Methodology Explanation | | Research Summary | Solution Approach and Design Decisions | | Research Summary | Experimental Protocol | | Research Summary | Evaluation Framework | | Research Summary | Dataset Sources & Collection | | Research Summary | Dataset Description | | Research Summary | Performance Metrics Analysis | | Research Summary | Comparative Analysis | | Research Summary | Key Results | | Research Summary | Results Interpretation | | Research Summary | Constraints, Boundaries, and Limitations | | Research Summary | Key Findings | | Research Summary | Significance and Implications of Work | | Research Summary | Reader Next Steps | | Research Summary | Originality of Work | | Research Summary | Innovation in Methods/Approaches | | Research Summary | Advancement of Knowledge or Practice | | Research Summary | Industry Insights | | Research Summary | Content Accessibility | | Research Summary | Technical Progression | | Research Summary | Scientific Clarity | | Research Summary | Section Structure |--DIVIDER--### B.4 Tool/App/Software | Publication Type | Criterion Name | |------------------|----------------------| | Tool / App / Software| Clear Purpose and Objectives | | Tool / App / Software| Specific Objectives | | Tool / App / Software| Intended Audience/Use Case | | Tool / App / Software| Clear Prerequisites and 
Requirements | | Tool / App / Software| Current State Gap Identification | | Tool / App / Software| Context Establishment | | Tool / App / Software| Features and Benefits Analysis | | Tool / App / Software| Tools, Frameworks, & Services | | Tool / App / Software| Implementation Considerations | | Tool / App / Software| Constraints, Boundaries, and Limitations | | Tool / App / Software| Significance and Implications of Work | | Tool / App / Software| Originality of Work | | Tool / App / Software| Innovation in Methods/Approaches | | Tool / App / Software| Advancement of Knowledge or Practice | | Tool / App / Software| Competitive Differentiation | | Tool / App / Software| Real-World Applications | | Tool / App / Software| Source Credibility | | Tool / App / Software| Technical Asset Access Links | | Tool / App / Software| Installation and Usage Instructions | | Tool / App / Software| Performance Characteristics and Requirements | | Tool / App / Software| Maintenance and Support Status | | Tool / App / Software| Access and Availability Status | | Tool / App / Software| License and Usage Rights of the Technical Asset | | Tool / App / Software| Contact Information of Asset Creators |--DIVIDER--### B.5 Dataset Contribution | Publication Type | Criterion Name | |------------------|----------------------| | Dataset Contribution | Clear Purpose and Objectives | | Dataset Contribution | Specific Objectives | | Dataset Contribution | Intended Audience/Use Case | | Dataset Contribution | Current State Gap Identification | | Dataset Contribution | Context Establishment | | Dataset Contribution | Datset procesing Methodology | | Dataset Contribution | Basic Dataset Stats | | Dataset Contribution | Implementation Details | | Dataset Contribution | Tools, Frameworks, & Services | | Dataset Contribution | Constraints, Boundaries, and Limitations | | Dataset Contribution | Key Findings | | Dataset Contribution | Significance and Implications of Work | | Dataset Contribution | Future Directions | | Dataset Contribution | Originality of Work | | Dataset Contribution | Innovation in Methods/Approaches | | Dataset Contribution | Advancement of Knowledge or Practice | | Dataset Contribution | Data Source and Collection | | Dataset Contribution | Data Inclusion and Filtering Criteria | | Dataset Contribution | Dataset Creation Quality Control Methodology | | Dataset Contribution | Dataset Bias and Representation Consideration | | Dataset Contribution | Statistical Characteristics | | Dataset Contribution | Dataset Quality Metrics and Indicators | | Dataset Contribution | Source Credibility | | Dataset Contribution | Technical Asset Access Links | | Dataset Contribution | Maintenance and Support Status | | Dataset Contribution | Access and Availability Status | | Dataset Contribution | License and Usage Rights of the Technical Asset | | Dataset Contribution | Contact Information of Asset Creators | | Dataset Contribution | Section Structure |--DIVIDER--### B.6 Academic Project Showcase | Publication Type | Criterion Name | | ------------------------- | ---------------------------------------- | | Academic Project Showcase | Clear Purpose and Objectives | | Academic Project Showcase | Specific Objectives | | Academic Project Showcase | Context Establishment | | Academic Project Showcase | Methodology Explanation | | Academic Project Showcase | Solution Approach and Design Decisions | | Academic Project Showcase | Evaluation Framework | | Academic Project Showcase | Dataset Sources & Collection | | Academic Project 
Showcase | Dataset Description | | Academic Project Showcase | Datset procesing Methodology | | Academic Project Showcase | Implementation Details | | Academic Project Showcase | Tools, Frameworks, & Services | | Academic Project Showcase | Performance Metrics Analysis | | Academic Project Showcase | Comparative Analysis | | Academic Project Showcase | Key Results | | Academic Project Showcase | Results Interpretation | | Academic Project Showcase | Constraints, Boundaries, and Limitations | | Academic Project Showcase | Key Findings | | Academic Project Showcase | Future Directions | | Academic Project Showcase | Purpose-Aligned Topic Coverage | | Academic Project Showcase | Appropriate Technical Depth | | Academic Project Showcase | Code Usage Appropriateness | | Academic Project Showcase | Code Clarity and Presentation | | Academic Project Showcase | Code Explanation Quality |--DIVIDER--### B.7 Applied Solution Showcase | Publication Type | Criterion Name | |------------------|----------------------| | Applied Project Showcase | Clear Purpose and Objectives | | Applied Project Showcase | Specific Objectives | | Applied Project Showcase | Current State Gap Identification | | Applied Project Showcase | Context Establishment | | Applied Project Showcase | Methodology Explanation | | Applied Project Showcase | Solution Approach and Design Decisions | | Applied Project Showcase | Evaluation Framework | | Applied Project Showcase | Dataset Sources & Collection | | Applied Project Showcase | Dataset Description | | Applied Project Showcase | Datset procesing Methodology | | Applied Project Showcase | Implementation Details | | Applied Project Showcase | Deployment Considerations | | Applied Project Showcase | Tools, Frameworks, & Services | | Applied Project Showcase | Implementation Considerations | | Applied Project Showcase | Monitoring and Maintenance Considerations | | Applied Project Showcase | Performance Metrics Analysis | | Applied Project Showcase | Comparative Analysis | | Applied Project Showcase | Key Results | | Applied Project Showcase | Results Interpretation | | Applied Project Showcase | Constraints, Boundaries, and Limitations | | Applied Project Showcase | Key Findings | | Applied Project Showcase | Significance and Implications of Work | | Applied Project Showcase | Future Directions | | Applied Project Showcase | Advancement of Knowledge or Practice | | Applied Project Showcase | Purpose-Aligned Topic Coverage | | Applied Project Showcase | Appropriate Technical Depth | | Applied Project Showcase | Code Usage Appropriateness | | Applied Project Showcase | Code Clarity and Presentation | | Applied Project Showcase | Code Explanation Quality | | Applied Project Showcase | Industry Insights | | Applied Project Showcase | Technical Progression | | Applied Project Showcase | Scientific Clarity | | Applied Project Showcase | Source Credibility | | Applied Project Showcase | Uncommon Insights |--DIVIDER--### B.8 Case Study | Publication Type | Criterion Name | |------------------|----------------------| | Case Study | Clear Purpose and Objectives | | Case Study | Specific Objectives | | Case Study | Problem Definition | | Case Study | Current State Gap Identification | | Case Study | Context Establishment | | Case Study | Methodology Explanation | | Case Study | Dataset Sources & Collection | | Case Study | Implementation Details | | Case Study | Performance Metrics Analysis | | Case Study | Key Results | | Case Study | Results Interpretation | | Case Study | Key Findings | | 
Case Study | Solution Impact Assessment | | Case Study | Significance and Implications of Work | | Case Study | Uncommon Insights |--DIVIDER--### B.9 Industry Product Showcase | Publication Type | Criterion Name | |------------------|----------------------| | Industry Product Showcase | Clear Purpose and Objectives | | Industry Product Showcase | Target Audience Definition | | Industry Product Showcase | Clear Prerequisites and Requirements | | Industry Product Showcase | Problem Definition | | Industry Product Showcase | Current State Gap Identification | | Industry Product Showcase | Context Establishment | | Industry Product Showcase | Deployment Considerations | | Industry Product Showcase | Tools, Frameworks, & Services | | Industry Product Showcase | Implementation Considerations | | Industry Product Showcase | Constraints, Boundaries, and Limitations | | Industry Product Showcase | Significance and Implications of Work | | Industry Product Showcase | Features and Benefits Analysis | | Industry Product Showcase | Competitive Differentiation | | Industry Product Showcase | Originality of Work | | Industry Product Showcase | Innovation in Methods/Approaches | | Industry Product Showcase | Advancement of Knowledge or Practice | | Industry Product Showcase | Real-World Applications | | Industry Product Showcase | Technical Asset Access Links | | Industry Product Showcase | Installation and Usage Instructions | | Industry Product Showcase | Performance Characteristics and Requirements | | Industry Product Showcase | Maintenance and Support Status | | Industry Product Showcase | Access and Availability Status | | Industry Product Showcase | License and Usage Rights of the Technical Asset | | Industry Product Showcase | Contact Information of Asset Creators | --DIVIDER--### B.10 Solution Implementation Guide | Publication Type | Criterion Name | |------------------|----------------------| | Solution Implementation Guide | Clear Purpose and Objectives | | Solution Implementation Guide | Specific Objectives | | Solution Implementation Guide | Intended Audience/Use Case | | Solution Implementation Guide | Problem Definition | | Solution Implementation Guide | Current State Gap Identification | | Solution Implementation Guide | Context Establishment | | Solution Implementation Guide | Clear Prerequisites and Requirements | | Solution Implementation Guide | Step-by-Step Guidance Quality | | Solution Implementation Guide | Data Requirements Specification | | Solution Implementation Guide | Deployment Considerations | | Solution Implementation Guide | Tools, Frameworks, & Services | | Solution Implementation Guide | Implementation Considerations | | Solution Implementation Guide | Significance and Implications of Work | | Solution Implementation Guide | Features and Benefits Analysis | | Solution Implementation Guide | Reader Next Steps | | Solution Implementation Guide | Purpose-Aligned Topic Coverage | | Solution Implementation Guide | Appropriate Technical Depth | | Solution Implementation Guide | Code Usage Appropriateness | | Solution Implementation Guide | Code Clarity and Presentation | | Solution Implementation Guide | Code Explanation Quality | | Solution Implementation Guide | Real-World Applications | | Solution Implementation Guide | Content Accessibility | | Solution Implementation Guide | Technical Progression | | Solution Implementation Guide | Scientific Clarity | | Solution Implementation Guide | Source Credibility | | Solution Implementation Guide | Uncommon Insights | 
--DIVIDER--### B.11 Technical Deep-Dive | Publication Type | Criterion Name | |------------------|----------------------| | Technical Deep-Dive | Clear Purpose and Objectives | | Technical Deep-Dive | Specific Objectives | | Technical Deep-Dive | Intended Audience/Use Case | | Technical Deep-Dive | Clear Prerequisites and Requirements | | Technical Deep-Dive | Current State Gap Identification | | Technical Deep-Dive | Context Establishment | | Technical Deep-Dive | Methodology Explanation | | Technical Deep-Dive | Assumptions Stated | | Technical Deep-Dive | Solution Approach and Design Decisions | | Technical Deep-Dive | Implementation Considerations | | Technical Deep-Dive | Key Results | | Technical Deep-Dive | Results Interpretation | | Technical Deep-Dive | Constraints, Boundaries, and Limitations | | Technical Deep-Dive | Key Findings | | Technical Deep-Dive | Significance and Implications of Work | | Technical Deep-Dive | Reader Next Steps | | Technical Deep-Dive | Purpose-Aligned Topic Coverage | | Technical Deep-Dive | Appropriate Technical Depth | | Technical Deep-Dive | Code Usage Appropriateness | | Technical Deep-Dive | Code Clarity and Presentation | | Technical Deep-Dive | Code Explanation Quality | | Technical Deep-Dive | Real-World Applications | | Technical Deep-Dive | Supporting Examples | | Technical Deep-Dive | Content Accessibility | | Technical Deep-Dive | Technical Progression | | Technical Deep-Dive | Scientific Clarity | --DIVIDER--### B.12 Technical Guide | Publication Type | Criterion Name | |------------------|----------------------| | Technical Guide | Clear Purpose and Objectives | | Technical Guide | Specific Objectives | | Technical Guide | Intended Audience/Use Case | | Technical Guide | Clear Prerequisites and Requirements | | Technical Guide | Context Establishment | | Technical Guide | Methodology Explanation | | Technical Guide | Implementation Considerations | | Technical Guide | Constraints, Boundaries, and Limitations | | Technical Guide | Key Findings | | Technical Guide | Significance and Implications of Work | | Technical Guide | Reader Next Steps | | Technical Guide | Purpose-Aligned Topic Coverage | | Technical Guide | Appropriate Technical Depth | | Technical Guide | Code Usage Appropriateness | | Technical Guide | Code Clarity and Presentation | | Technical Guide | Code Explanation Quality | | Technical Guide | Real-World Applications | | Technical Guide | Supporting Examples | | Technical Guide | Content Accessibility | | Technical Guide | Technical Progression | | Technical Guide | Scientific Clarity | --DIVIDER--### B.13 Tutorial | Publication Type | Criterion Name | |------------------|----------------------| | Tutorial | Clear Purpose and Objectives | | Tutorial | Specific Objectives | | Tutorial | Intended Audience/Use Case | | Tutorial | Context Establishment | | Tutorial | Clear Prerequisites and Requirements | | Tutorial | Step-by-Step Guidance Quality | | Tutorial | Data Requirements Specification | | Tutorial | Constraints, Boundaries, and Limitations | | Tutorial | Reader Next Steps | | Tutorial | Purpose-Aligned Topic Coverage | | Tutorial | Appropriate Technical Depth | | Tutorial | Code Usage Appropriateness | | Tutorial | Code Clarity and Presentation | | Tutorial | Code Explanation Quality | | Tutorial | Real-World Applications | | Tutorial | Supporting Examples | | Tutorial | Content Accessibility | | Tutorial | Technical Progression | | Tutorial | Scientific Clarity | | Tutorial | Source Credibility | | Tutorial | Uncommon 
Insights |--DIVIDER--### B.14 Blog | Publication Type | Criterion Name | |------------------|----------------------| | Blog | Clear Purpose and Objectives | | Blog | Context Establishment | | Blog | Purpose-Aligned Topic Coverage | | Blog | Appropriate Technical Depth | | Blog | Real-World Applications | | Blog | Supporting Examples | | Blog | Industry Insights | | Blog | Success/Failure Stories | | Blog | Content Accessibility | | Blog | Source Credibility | | Blog | Reader Next Steps | | Blog | Uncommon Insights |
SHMk0UbaMlcq
ready-tensor
Introduction to Knowledge Graphs with Neo4j
![hero.png](hero.png)--DIVIDER--# Introduction Behind every Google search, LinkedIn connection, or Amazon recommendation lies a powerful concept: the knowledge graph. At its core, a knowledge graph represents information as interconnected entities and relationships, mirroring how humans naturally think about and connect information. While traditional databases store data in rigid tables and rows, knowledge graphs create a rich network of relationships that can capture complex real-world connections. Neo4j, as the leading graph database platform, has revolutionized how organizations implement and utilize knowledge graphs. From fraud detection in financial services to recommendation engines in retail, and even enhancing AI systems through integration with Large Language Models, Neo4j provides the foundation for building sophisticated graph-based solutions. Think of a knowledge graph as a digital mirror of relationships in the real world. In a movie database, for instance, rather than having separate tables for films, actors, and directors, a knowledge graph directly connects these entities: Christopher Nolan is connected to "Inception," which is connected to Leonardo DiCaprio, which in turn connects to other films, directors, and co-stars. This web of connections enables powerful queries that would be complex or impossible with traditional databases. The applications of knowledge graphs span across industries and use cases. Social networks use them to understand user connections and suggest new relationships. E-commerce platforms leverage them to provide personalized product recommendations. Healthcare organizations employ them to understand drug interactions and patient relationships. More recently, they've emerged as valuable tools for grounding Large Language Models in factual knowledge, reducing hallucinations and improving response accuracy. This article provides a comprehensive introduction to knowledge graphs using Neo4j. We'll explore the fundamental concepts, learn how to model and query graph data, and examine practical applications across different domains. Whether you're a developer, data scientist, or business analyst, understanding knowledge graphs is becoming increasingly crucial in today's interconnected data landscape.--DIVIDER--# Objectives The main objective of this article is to provide a comprehensive introduction to knowledge graphs and demonstrate their practical application using Neo4j through a citation network analysis. Specific objectives include: - Understand the fundamental concepts of knowledge graphs and their advantages over traditional databases - Learn how to model and query data in Neo4j using the Cypher query language - Analyze a real-world citation network to demonstrate the power of graph databases - Explore complex relationship patterns that would be difficult to implement in traditional databases--DIVIDER--# Prerequisites To follow along with this article, you'll need: Technical Requirements - Neo4j Desktop (version 5.0 or later) - Web browser for Neo4j Browser interface - Basic understanding of database concepts - Familiarity with SQL (helpful but not required) Dataset We'll be using the Citations dataset available in Neo4j, which can be loaded directly through Neo4j Desktop. No additional data preparation is required. Note: All examples use Cypher, Neo4j's query language. While SQL experience is helpful, we'll explain all concepts from the ground up.--DIVIDER--# What is a Knowledge Graph? 
knowledge graph is a structured representation of data that emphasizes relationships between entities, much like how we naturally connect information in our minds. Unlike traditional relational databases that store data in tables, knowledge graphs use a more flexible and intuitive structure built on two fundamental concepts: nodes (entities) and relationships (edges). Consider how we might represent information about movies. In a knowledge graph: - Nodes represent entities like movies, actors, directors, and genres - Relationships show how these entities connect: actors "ACTED_IN" movies, directors "DIRECTED" films, movies are "IN_GENRE" categories - Properties on both nodes and relationships enrich the information: movies have release dates and ratings, actors have birthdates, and acting relationships might have role names For example: Tom Hanks (node) →[ACTED_IN]→ Forrest Gump (node) Forrest Gump (node) →[IN_GENRE]→ Drama (node) Robert Zemeckis (node) →[DIRECTED]→ Forrest Gump (node) ## Key Components of Knowledge Graphs 1. Nodes (Vertices) - Represent entities or concepts - Can have labels indicating their type (e.g., Person, Movie, Genre) - Contain properties (attributes) describing the entity 2. Relationships (Edges) - Connect nodes to show how entities relate - Are directed (have a start and end node) - Have specific types describing the connection - Can contain properties themselves (e.g., date, weight, role) 3. Properties - Key-value pairs attached to both nodes and relationships - Provide additional context and information - Can be indexed for efficient querying - Examples: name: "Tom Hanks", born: "1956", rating: 4.8 --DIVIDER--## Example Knowledge Graph This example knowledge graph illustrates several key concepts: 1. Node Types (Labels): - Movies with properties: title, year - People (actors/directors) with properties: name, birth year - Genres with properties: name 2. Relationship Types: - ACTED_IN: connects actors to movies - DIRECTED: connects directors to movies - IN_GENRE: connects movies to genres ```mermaid %%{init: {'theme':'default', 'themeVariables': { 'fontSize': '16px'}, "flowchart" : { "nodeSpacing" : 50, "rankSpacing" : 400 }} }%% %% CSS styling for arrows %%{init: { 'theme': 'default', 'themeVariables': { 'edgeLabelBackground':'#ffffff', 'lineColor': '#148011' } }}%% graph LR linkStyle default stroke:#148011,stroke-width:2px %% Movies Matrix[("The Matrix (Movie) 1999")] Speed[("Speed (Movie) 1994")] Point[("Point Break (Movie) 1991")] %% Actors Keanu[("Keanu Reeves (Person) born: 1964")] Sandra[("Sandra Bullock (Person) born: 1964")] Patrick[("Patrick Swayze (Person) 1952-2009")] %% Directors Wachowski[("Lana Wachowski (Person) born: 1965")] Bigelow[("Kathryn Bigelow (Person) born: 1951")] %% Genres Action[("Action (Genre)")] SciFi[("Sci-Fi (Genre)")] %% Relationships - Movies to Actors Keanu -->|ACTED_IN| Matrix Keanu -->|ACTED_IN| Speed Keanu -->|ACTED_IN| Point Sandra -->|ACTED_IN| Speed Patrick -->|ACTED_IN| Point %% Relationships - Directors to Movies Wachowski -->|DIRECTED| Matrix Bigelow -->|DIRECTED| Point %% Relationships - Movies to Genres Matrix -->|IN_GENRE| Action Matrix -->|IN_GENRE| SciFi Speed -->|IN_GENRE| Action Point -->|IN_GENRE| Action ```--DIVIDER--# Why Graph Databases? Traditional databases excel at handling structured, tabular data, but they often struggle when dealing with highly connected information. Graph databases, particularly Neo4j, provide a more natural way to work with interconnected data. 
Let's explore why organizations are increasingly turning to graph databases for their data needs. **Limitations of Traditional Databases** 1. Relational Databases - Complex JOIN operations for connected data - Performance degrades with relationship depth - Rigid schema that's difficult to modify - Relationships must be inferred through foreign keys - Complex queries become hard to write and maintain 2. Document Databases - No native support for relationships - Data duplication to represent connections - Difficulty in traversing related documents - Limited ability to query across relationships **Advantages of Graph Databases** 1. Natural Data Modeling Instead of tables or documents, data is modeled as it exists in the real world: - People know people - Products have categories - Locations connect to locations - Events involve multiple entities - Documents reference other documents 2. Performance - Relationship traversal in constant time - No need for expensive JOIN operations - Queries maintain performance as data grows - Efficient for deeply connected queries - Index-free adjacency for fast graph operations 3. Real-World Use Cases Financial Services: - Fraud detection through pattern recognition - Risk assessment through relationship analysis - Money laundering detection - Trading networks analysis Healthcare: - Patient journey mapping - Drug interaction networks - Treatment pathway analysis - Research relationship mapping Technology: - Network and IT infrastructure management - Dependency tracking - Impact analysis - Access management --DIVIDER--# Querying a Graph Neo4j is the world's leading graph database platform, designed specifically for storing and querying connected data. While traditional relational databases excel at handling structured, tabular data, Neo4j shines when dealing with complex relationships and interconnected information. At its core, Neo4j uses a property graph model where data is stored as nodes (entities) connected by relationships. <h2> Neo4j's Query Language: Cypher</h2> If you're familiar with SQL, you'll find Cypher, Neo4j's query language, refreshingly intuitive. Cypher was designed to be visually explicit, with its syntax pattern matching the graph structures it queries. <br><br> The most basic type of query in Cypher is to retrieve all nodes in the database. You can do this by running the following query: ```python MATCH (n) RETURN n ``` This means we want to match any node (we called it `n`) and return all these nodes. You can also restrict the match to a specific type of node. For example, you might want to retrieve all nodes of type **Movie**. You can achieve this by running: ```python MATCH (n:Movie) RETURN n ``` This time we are matching against all nodes of type **Movie**. You can also include relationships in the query. Let's say we want to retrieve all movies that Tom Hanks acted in. We can do this by running: ```python MATCH (actor:Actor) -[:ACTED_IN]-> (movie:Movie) WHERE actor.name = "Tom Hanks" RETURN movie.title AS movie_title ``` In this query, we are specifying a relationship between an actor and a movie where the type of the relationship is `ACTED_IN`. Note that the relationship has a direction. It is syntactically correct to write `(actor:Actor) <-[:ACTED_IN]- (movie:Movie)`, but this is not going to return anything since it makes no sense to have a relationship of type `ACTED_IN` coming out of a movie. Let's compare some common operations in SQL and Cypher: 1.
Creating Data **SQL** ```python -- Creating a new movie INSERT INTO Movies (title, released) VALUES ('The Matrix', 1999); -- Creating an actor and linking to movie INSERT INTO Actors (name, born) VALUES ('Keanu Reeves', 1964); INSERT INTO ActedIn (actor_id, movie_id, role) VALUES (1, 1, 'Neo'); ``` **Cypher** ```python // Creating a movie and actor with relationship in one query CREATE (m:Movie {title: 'The Matrix', released: 1999}) CREATE (a:Person {name: 'Keanu Reeves', born: 1964}) CREATE (a)-[:ACTED_IN {role: 'Neo'}]->(m) ``` 2. Querying Data :::tip{title="Tip"} <h3>The Query</h3> Return actors names who acted in movies released after 1990 along with the movie title and year of release. Order the results by the year of release in descending order and display the top 5 most recent movies. ::: **SQL** ```python SELECT a.name, m.title, m.released FROM Actors a JOIN ActedIn ai ON a.id = ai.actor_id JOIN Movies m ON m.id = ai.movie_id WHERE m.released > 1990 ORDER BY m.released DESC LIMIT 5; ``` **Cypher** ```python MATCH (a:Person)-[:ACTED_IN]->(m:Movie) WHERE m.released > 1990 RETURN a.name, m.title, m.released ORDER BY m.released DESC LIMIT 5; ``` --DIVIDER--**Key points about Cypher syntax:** - WHERE clause works similarly to SQL - ORDER BY supports both ASC and DESC - LIMIT works the same way as SQL - Can use aliases with 'as' keyword - Can use aggregation functions (count, collect, etc.) **Key Differences:** - No JOINs needed in Cypher - relationships are first-class citizens - Pattern matching is visual and intuitive - Graph patterns can be expressed more concisely - Complex queries become more readable - No need for junction tables to represent relationships For more information on Cypher, refer to [Neo4j documentation](https://neo4j.com/docs/cypher-cheat-sheet/5/all/)--DIVIDER--# Analyzing Research Impact ## Introduction to the Dataset In this section, we will be using the **citations** dataset provided by **Neo4j**. 
The dataset represents an academic citation network containing three main entities: - Articles: Research papers with properties like title and publication year - Authors: Researchers who wrote the articles - Venues: Conferences or journals where articles were published The relationships between these entities tell us: - Who wrote which papers (Author-AUTHOR->Article) - Where papers were published (Article-VENUE->Venue) - How papers reference each other (Article-CITED->Article) --DIVIDER--```mermaid %%{init: { 'theme': 'default', 'themeVariables': { 'fontSize': '16px'}, 'flowchart': { 'nodeSpacing': 50, } }}%% graph LR %% Node definitions with properties Article[("(Article) title year")] Author[("(Author) id name")] Venue[("(Venue) id name")] %% Self-referential CITED relationship Article -->|CITED| Article %% Article to Venue relationship Article -->|VENUE| Venue %% Article to Author relationship Author -->|AUTHOR| Article %% Styling classDef default fill:#f9f9f9,stroke:#333,stroke-width:2px; classDef articleClass fill:#f9f9f9,stroke:#333,stroke-width:2px,color:#ff0000; classDef authorClass fill:#f9f9f9,stroke:#333,stroke-width:2px,color:#0000ff; classDef venueClass fill:#f9f9f9,stroke:#333,stroke-width:2px,color:#008000; %% Apply classes to nodes class Article articleClass; class Author authorClass; class Venue venueClass; linkStyle default stroke-width:2px; %% Style specific relationships linkStyle 0 stroke:#ff6b6b,stroke-width:2px; linkStyle 1 stroke:#4834d4,stroke-width:2px; linkStyle 2 stroke:#22a6b3,stroke-width:2px; ```--DIVIDER--## Problem Statement In academic research, understanding the flow of knowledge and identifying influential papers is crucial. Traditional metrics like simple citation counts don't tell the whole story. We'll demonstrate how graph databases can reveal deeper insights about research impact and knowledge propagation through citation networks. **Specific Questions We'll Answer:** 1. Direct Impact Analysis - Which papers are most cited? - Who are the most influential authors? - Which venues have the highest impact? 2. Knowledge Flow Analysis - How do ideas propagate through citation chains? - Which papers serve as bridges between different research areas? - How has citation behavior changed over time? Let's start with basic impact analysis and progressively build more complex queries: --DIVIDER--Which papers are the most cited? ```python MATCH (a:Article)<-[c:CITED]- () RETURN DISTINCT a.title AS Title, a.n_citation AS citation_count ORDER BY citation_count DESC LIMIT 3 ``` | Title | citation_count | | --- | --- | | A method for obtaining digital signatures and public-key cryptosystems | 18861 | | Pastry: Scalable, Decentralized Object Location, and Routing for Large-Scale Peer-to-Peer Systems | 10467 | | Time, clocks, and the ordering of events in a distributed system | 9521 | --DIVIDER--One of the most powerful capabilities of graph databases is their ability to traverse relationships efficiently and find complex patterns. 
Consider this query: ```python MATCH path = (a1:Article)-[:CITED*2..3]->(a2:Article) WHERE a1.year > 2015 RETURN path LIMIT 10; ``` This query traces how knowledge flows through the citation network by: - Starting from recent articles (published after 2015) - Following citation chains of length 2 to 3 (papers citing papers that cite other papers) - Returning the complete paths of these citation chains While this seems simple in Cypher, implementing the same analysis in a relational database would be extremely challenging: - It would require multiple self-joins on the articles table - Each additional step in the chain would need another join - Performance would degrade significantly as the chain length increases - The SQL query would be complex and hard to maintain Let's see the results of running this query in Neo4j browser: ![query.gif](query.gif)--DIVIDER-- # Limitations While knowledge graphs and Neo4j offer powerful capabilities for handling connected data, several key limitations should be considered: 1. Resource Intensity - Higher memory requirements compared to traditional databases - Performance challenges with large-scale graphs and complex traversals 2. Technical Barriers - Steep learning curve for Cypher query language - Limited availability of tools and expertise compared to traditional databases - Complex data migration from existing relational systems 3. Scalability Challenges - Distributed processing more complex than traditional databases - Performance bottlenecks with highly connected nodes - Real-time updates can be challenging at scale 4. Data Modeling Complexity - Requires careful balance between normalization and performance - Complex decisions around node/relationship granularity - Integration challenges with existing systems These limitations should be evaluated against specific use case requirements when considering a knowledge graph implementation.--DIVIDER--# Conclusion Throughout this article, we explored the power of graph databases through the lens of academic citation networks. Using Neo4j and its query language Cypher, we demonstrated how naturally graph databases handle interconnected data that would be complex to model and query in traditional relational databases. Our exploration of citation networks showcased key advantages of graph databases: - Intuitive data modeling with nodes and relationships - Simple yet powerful queries for complex patterns - Efficient traversal of relationship chains - Natural representation of real-world connections The ability to easily traverse citation patterns through multiple levels demonstrates the elegant simplicity of graph databases compared to the complex joins required in relational databases. While we focused on academic citations, these same principles apply to many domains where relationship analysis is crucial, from social networks to fraud detection. As data becomes increasingly connected, graph databases offer not just a different way to store data, but a more suitable approach for understanding and analyzing relationships within our data. Their ability to efficiently handle complex relationships while maintaining performance makes them an invaluable tool for modern data analysis. --DIVIDER--# References 1. [Neo4j Graph Database System](https://neo4j.com/) 2. [Cypher Documentation](https://neo4j.com/docs/cypher-cheat-sheet/5/all/)--DIVIDER----DIVIDER--
SQpaze1akU6g
ready-tensor
cc-by-sa
CPUs, GPUs, and TPUs: The Hardware Engines Driving AI
![photo-output.JPEG](photo-output.JPEG)--DIVIDER-- # What We Will Cover Welcome to this article on the hardware that powers Artificial Intelligence (AI) and machine learning. As AI continues to evolve, understanding the relationship between algorithms and their associated hardware becomes crucial. This article will provide clarity on the role of different hardware types and guide you in selecting the right computational tools for your machine learning projects. In this article, we cover: - **Introduction: The hardware backbone of AI**: Explore the role of CPUs, GPUs, and TPUs in AI and machine learning. - **Historical Context: The evolution of AI hardware**: Understand the transition of GPUs from graphics to deep learning and other hardware developments. - **Understanding the Basics: A guide to AI hardware**: Learn about the main hardware components used in AI, focusing on their capabilities and limitations. - **Practical Guidance for Data Scientists**: Receive practical advice on choosing the right hardware for your projects. - **Future Landscape and Emerging Technologies**: Look at potential developments in AI hardware, including emerging technologies like IPUs. - **Conclusion and Takeaways**: Reflect on the importance of hardware selection in AI and machine learning. By the end of this article, you will have a clear understanding of the AI hardware landscape, enabling you to make informed decisions for your AI projects. Let's jump right into it and explore the hardware that underpins AI's growth. -----DIVIDER-- # Introduction: The Hardware Backbone of AI In the rapid pace world of Artificial Intelligence (AI) and machine learning, much of the spotlight often shines on groundbreaking algorithms, innovative architectures, and the vast potential of data. Yet, underlying all these advancements is a foundational layer that often goes unnoticed: the hardware that powers these computational tasks. At the heart of this layer are the workhorses: CPUs, the well-known generalists, and their more specialized counterparts (the hardware accelerators like GPUs, TPUs, and the emerging IPUs). But what exactly are hardware accelerators? In essence, they are specialized computational devices designed to expedite specific types of operations, thus "accelerating" tasks that might be inefficient on general-purpose CPUs. As AI models grow in complexity and size, the role of these accelerators becomes more prominent, ensuring tasks are performed efficiently and swiftly. Understanding these components (both the generalist CPUs and specialist accelerators) is similar to a race car driver knowing their vehicle. While the driver's skill is paramount, the vehicle's capabilities often dictate the race's outcome. Similarly, for a data scientist or AI enthusiast, comprehending the strengths and limitations of your computational tools can profoundly influence the efficiency, scalability, and success of your projects. This article aims to unravel these fundamental tools, offering insights into their historical development, inherent strengths, and ideal application scenarios. 
Whether you're delving deep into neural networks, pondering over the infrastructure of an AI-driven venture, or simply seeking clarity on the ubiquitous tech jargon, this introduction to the backbone of AI's hardware world is crafted for you.--DIVIDER-- # Historical Context: The Evolution of AI Hardware The story of AI's hardware is one of continual evolution, driven by the escalating demands of ever-advancing algorithms and the growing complexity of datasets. **CPUs: The Generalist Workhorse of Computing** Central Processing Units (CPUs) are often deemed the brain of a computer, responsible for executing the instructions of a computer program. Their versatile architecture was designed to handle a plethora of tasks ranging from simple calculations to complex operations. Due to their sequential processing nature, CPUs are adept at handling tasks that require decision-making. As the computing world evolved, multi-core CPUs emerged, enhancing multitasking and parallel processing capabilities to an extent. However, as the complexity and scale of computations, particularly in AI, expanded exponentially, CPUs alone couldn't keep up. They remained indispensable for tasks necessitating sequential processing, but for parallelizable tasks, other hardware accelerators started taking the center stage. **GPUs: From Gaming to Deep Learning** Graphics Processing Units (GPUs) initially carved out their niche in the gaming industry, where their architecture excelled at rendering graphics and managing multiple operations simultaneously. It was later discovered that their architecture, which consists of many small cores capable of performing similar tasks in parallel, is also well-suited for a variety of computational tasks outside of graphics. This led to the development and popularization of GPGPU (General-Purpose computing on Graphics Processing Units). The transformative moment for GPUs in AI came with the [AlexNet paper](https://papers.nips.cc/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf) in 2012. Researchers harnessed the parallel processing power of GPUs to significantly accelerate deep learning computations, marking a seismic shift in hardware preferences for the AI community. Today, GPUs are used as hardware accelerators for a wide range of applications, from machine learning and scientific simulations to data analytics. They excel in situations where tasks can be parallelized, meaning the same operation can be performed simultaneously on different sets of data. By offloading suitable tasks from the CPU to the GPU, substantial speedups in computation can be achieved. **TPUs: Google's Answer to AI's Computational Demands** Tensor Processing Units (TPUs) emerged as Google's dedicated hardware accelerators for machine learning endeavors. Recognizing the increasing demands of neural network-based computations, Google tailored TPUs to optimize matrix operations, a foundation of deep learning algorithms. While GPUs possess a versatile architecture suitable for an array of tasks, TPUs stand out with their specialized focus, enhancing specific operations prevalent in machine learning. This specialization has solidified TPUs' position within Google's core services and the broader Google Cloud infrastructure. Their architecture, particularly the systolic array design, streamlines data flow, reducing the need for constant memory fetches and boosting operation speeds. 
In the AI hardware realm, TPUs underscore the trend towards specialization, ensuring optimal performance for specific tasks.--DIVIDER--:::info{title="Systolic Arrays: A Closer Look"} A systolic array is a specialized hardware architecture used in certain computer designs, notably in some of the TPUs developed by Google. The name "systolic" is inspired by the rhythmic contractions of the heart (systole), which push blood through the circulatory system. In a similar fashion, a systolic array processes data by "pumping" it through a network of processors in a coordinated, rhythmic manner. Here are the key characteristics of a systolic array: 1. **Parallelism**: It's a matrix of processors where each processor is connected to its neighbors, much like cells in a matrix. 2. **Data Movement**: Data flows between processors in a coordinated manner, often in lockstep. Once the data is input to the array, it flows through the processors and is acted upon at each step, until it reaches the end of the array. 3. **Efficiency**: Because data moves directly between adjacent processors, there's often a reduction in the need for costly memory accesses, leading to faster computation times and reduced power consumption. 4. **Specialization**: Systolic arrays are especially efficient for specific types of operations, such as matrix multiplications commonly used in deep learning algorithms. In the context of TPUs, the systolic array design is a major factor behind their efficiency, particularly for large-scale matrix operations that are common in neural network computations. By minimizing memory fetches and maximizing parallelism, systolic arrays allow TPUs to achieve high performance for specific machine learning tasks. :::--DIVIDER-- **The Ripple Effect of Hardware Advancements** The progression from CPUs to GPUs and TPUs, isn't just a chronology of more powerful tools. It mirrors the growth and evolution of AI itself. As each hardware innovation emerged, it unlocked new possibilities in AI, enabling more complex models, faster training times, and broader applications. Reflecting on this history reminds us that AI is as much about the tools we use as the algorithms we design. The synergy between software and hardware advancements propels the field forward, setting the stage for the innovations of tomorrow. --DIVIDER--## Understanding the Basics: Hardware Essentials ![c2d93990-e10d-4cf2-bf7f-81a4b41218e3.JPG](c2d93990-e10d-4cf2-bf7f-81a4b41218e3.JPG) When it comes to AI and deep learning, the choices of hardware can profoundly influence outcomes. Each type has its strengths, constraints, and optimal use cases. To make informed decisions, we need to understand the basics of these computational powerhouses. ## CPU (Central Processing Unit) - **Basics**: The CPU is the primary processor of a computer; it handles a broad range of tasks and orchestrates the operation of other components. In AI, the CPU manages tasks that require complex decision-making and processes that are not easily parallelized. - **Size and Architecture**: CPUs usually have fewer cores optimized for sequential serial processing, but modern architectures like AMD’s Ryzen or Intel’s Alder Lake include innovations like hybrid core designs, blending high-performance and efficiency cores. They are compact, fitting into most computer setups with ease. Their design caters to tasks that require complex decision-making and swift individual task execution. 
- **Monetary Cost**: The prices of CPUs range based on performance, with options from affordable to high-end (e.g., Intel Core i9, AMD Ryzen 9), with the cost typically scaling with performance and added features like support for advanced technologies (e.g., Intel’s Hyper-Threading or AMD’s Precision Boost). - **Energy Consumption**: Generally, CPUs have a balanced energy profile. They are designed for a variety of tasks and their consumption will peak under heavy loads. While efficient for general tasks, data-intensive computations like neural network training can lead to prolonged high energy usage. - **Advantages**: CPUs are versatile in handling a variety of tasks, efficient at complex decision-making tasks, and optimized for sequential execution. Ideal for general-purpose computing and handling non-parallelizable workloads. - **Limitations**: CPUs are not optimized for highly parallel tasks, making them less efficient than GPUs and TPUs for AI workloads, especially for large-scale matrix computations in deep learning. - **Use Cases**: General-purpose computing, systems operations. ## GPU (Graphics Processing Unit) - **Basics**: Originally for graphics rendering, GPUs consist of thousands of smaller cores designed for parallel processing tasks, making them essential for deep learning and other AI applications. Companies like Nvidia and AMD lead the charge in this space, with GPUs now designed to handle complex AI workloads alongside traditional graphical tasks. - **Size and Architecture**: GPUs boast hundreds to thousands of smaller cores tailored for parallel processing. Each core, while simpler than a CPU core, collaborates to handle tasks that can be broken down and processed simultaneously. High-performance GPUs are often larger, requiring advanced cooling mechanisms. The architecture of modern GPUs, like Nvidia’s A100 or the upcoming H100, includes advancements like Tensor Cores that accelerate deep learning tasks. - **Monetary Cost**: They can be quite pricey, especially models fine-tuned for high-end gaming or AI tasks. High-performance GPUs like Nvidia’s RTX 4090 and AMD’s Radeon RX 7900 XTX are on the premium end, with prices ranging from several hundred to thousands of dollars depending on performance. However, more mainstream GPUs are becoming increasingly affordable. - **Energy Consumption**: High, especially under heavy computational loads. High-end models such as the Nvidia RTX 4090 are designed with robust cooling solutions, and their power consumption can exceed 450 watts, making energy efficiency a consideration for large-scale AI projects. - **Advantages**: GPUs excel in parallelizable tasks, significantly accelerating operations like matrix multiplications essential in deep learning. The latest 3-nanometer chips, such as Nvidia's upcoming Hopper series, are pushing the boundaries of AI model training and inference, offering faster data throughput and better performance per watt. - **Limitations**: It is a waste of resources for tasks that aren't parallelizable because it often requires specialized coding. High-end GPUs may be excessive for simpler tasks and demand additional optimization. Their high power consumption adds to inefficiency in energy-intensive operations. - **Use Cases**: Graphics rendering, deep learning model training, and other parallelizable operations. ## TPU (Tensor Processing Unit) - **Basics**: TPUs were specially developed by Google for TensorFlow and are optimized for machine learning operations. 
They are specially designed to handle operations such as matrix multiplications and are deployed in Google Cloud for scalable AI processing. - **Size and Architecture**: TPUs are specialized ASICs (Application Specific Integrated Circuits) designed primarily for matrix operations fundamental in deep learning. They streamline specific operations in harmony with other units. The architecture is highly parallel, using thousands of smaller cores tailored to AI tasks. Such as Google’s TPU v4, offer extreme performance for tensor operations. - **Monetary Cost**: TPUs are often available through Google Cloud, where they operate on a pay-as-you-go basis. While not available for purchase as standalone chips, they are cost-effective in cloud environments for businesses and researchers conducting large-scale machine learning tasks. - **Energy Consumption**: They are optimized for specific tasks, often more efficient than GPUs for TensorFlow-related operations. This makes them a more energy-efficient choice for AI model training and inference at scale. - **Advantages**: Highly optimized for specific neural network computations, leading to increased efficiency and speed for compatible tasks. Their efficiency and scalability make them a popular choice for cloud-based AI operations. - **Limitations**: TPUs are less versatile than CPUs and GPUs due to their specialized nature. Mainly optimized for TensorFlow, although things have been changing. TPUs are best suited for TensorFlow-based AI projects and are increasingly being integrated into Google Cloud services. - **Use Cases**: Deep learning model training and inference, especially in environments leveraging TensorFlow.--DIVIDER-- # Practical Guidance for Data Scientists Selecting the right hardware for your AI and machine learning projects can feel like navigating a maze. As data scientists, we strive to maximize model performance while ensuring we don’t overspend on resources or energy. Here are some tailored considerations and advice to guide your decisions: --DIVIDER-- ## Overview of Best Practices **1. Understand Your Model's Needs**: Different models and tasks have distinct computational requirements. A simple linear regression will have vastly different demands than a large neural network. Before committing to hardware, evaluate the model’s complexity, data volume, and expected processing times. This will go a long way towards ensuring that you choose the most efficient and cost-effective hardware. Matching your hardware to your model’s needs prevents wasted resources, ensures optimal performance, and avoids unnecessary expenses on overpowered or underutilized systems. **2. Training vs. Inference**: Training a model typically requires more computational power than inference. While GPUs or TPUs might be ideal for training, CPUs or specialized edge devices might suffice for deployment and inference, especially in real-time applications. **3. Parallelism Opportunities**: If your models support parallel processing (like deep learning models do), lean towards GPUs or TPUs. Their architecture is specifically designed for this kind of task. However, if your workloads are more sequential or if you're working on traditional machine learning models, CPUs might be more appropriate. **4. Budget Considerations**: Always weigh the computational gains against costs. It might not always be feasible or necessary to invest in the most advanced hardware. 
Cloud platforms offer flexible pricing models, allowing for on-demand access to advanced hardware without upfront investments. **5. Ecosystem and Compatibility**: Ensure that the tools, libraries, and frameworks you rely on are compatible with your chosen hardware. While TPUs might offer performance boosts, they are primarily optimized for TensorFlow. If your stack is based on another framework, a GPU might be a better fit. **6. Future-Proofing**: When making long-term hardware decisions, consider the direction in which the AI and machine learning fields are moving. Emerging algorithms, tools, and best practices might change the landscape, so it’s wise to have hardware that can adapt to these shifts. **7. Environmental Impact**: In an age of increasing environmental consciousness, consider the energy consumption of your hardware choices. Optimizing energy use is not only cost-effective but also contributes to sustainable and eco-friendly practices. **8. Experiment and Iterate**: Lastly, don’t hesitate to experiment. Benchmarks and theoretical knowledge are useful, but real-world testing will give you the most accurate insight into how a particular piece of hardware will perform for your specific needs. If possible, conduct pilot tests on different hardware platforms to gauge performance. ### Practical Guidance Examples 1. **Tabular Data Models**: For traditional ML models like regressions or tree-based models on tabular data, CPUs are typically sufficient. 2. **Simple Dense Neural Networks**: These can be efficiently trained on CPUs, but for faster performance, especially with larger networks, GPUs can provide a significant boost. 3. **Convolutional Neural Networks (CNNs)**: Given the parallel nature of their operations, GPUs are the gold standard for training and deploying CNNs. 4. **Transformers**: While smaller transformer models can be trained on GPUs 16GB+ VRAM., TPUs might be a better choice for larger models because of their matrix multiplication optimizations. 5. **Large Language Models (LLMs)**: TPUs are the preferred choice for training these models, though distributed training across multiple GPUs can also be an option, especially for fine-tuning on specific tasks.--DIVIDER-- :::info ### 📘 PyTorch and TPUs: A Brief Overview While TensorFlow and TPUs (both from Google's umbrella) traditionally shared a more integrated relationship, it's entirely possible and increasingly common to run PyTorch models on TPUs. Thanks to collaborative efforts between Google and PyTorch developers, a bridge has been built for this exact purpose; **PyTorch/XLA**. **Key Points**: 1. **Library Integration**: PyTorch/XLA is a specialized library that allows PyTorch to harness the power of TPUs, taking advantage of the Accelerated Linear Algebra (XLA) compiler. 2. **Device Handling**: Just as you'd move PyTorch tensors between CPU and GPU with `to()`, with PyTorch/XLA, you'll use a new device type: `xla`. Models and data can be transferred to the TPU using this device reference. 3. **Optimized Operations**: While you can still use standard PyTorch optimizers, for optimal performance on TPUs, it's recommended to employ TPU-optimized variants provided by PyTorch/XLA. 4. **Distributed Training**: To fully utilize TPUs and their multiple cores, consider distributed training. PyTorch/XLA offers utilities for this, allowing efficient parallel processing across TPU cores. 
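To make the device-handling and optimizer points above concrete, here is a minimal sketch of a single-device training step with PyTorch/XLA. It assumes `torch_xla` is installed and a TPU runtime is available; the model, data, and hyperparameters are placeholders chosen purely for illustration, not a recommended configuration.

```python
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm

# Acquire the TPU device, analogous to torch.device("cuda") for GPUs
device = xm.xla_device()

# Any nn.Module can be moved to the TPU with .to(device); this tiny model is a placeholder
model = nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for step in range(10):
    # In a real job, batches would come from a DataLoader (optionally wrapped
    # in torch_xla's parallel loader for multi-core TPU training)
    inputs = torch.randn(32, 128).to(device)
    targets = torch.randint(0, 10, (32,)).to(device)

    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()

    # XLA-aware optimizer step; barrier=True triggers execution of the queued graph
    xm.optimizer_step(optimizer, barrier=True)
```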
**Resources**: If you're keen on diving into TPU-powered PyTorch projects, the [PyTorch/XLA documentation](https://pytorch.org/xla/release/1.5/index.html) provides comprehensive guides, tutorials, and troubleshooting tips.
:::
--DIVIDER--
## Guidance on Fine-Tuning Large Language Models (LLMs)

![8e21854d57d5cd9650a71b1858d09b8553c81981-2048x1152.webp](8e21854d57d5cd9650a71b1858d09b8553c81981-2048x1152.webp)

Fine-tuning Large Language Models (LLMs) is a task that demands significant computational resources. However, recent advances in fine-tuning techniques have made this process more efficient and accessible. Here are some tailored pointers:

1. **Parameter-Efficient Fine-Tuning (PEFT)**: Instead of fine-tuning all model parameters, PEFT techniques like LoRA (Low-Rank Adaptation), Adapters, and Prompt-Tuning adjust a small subset of the parameters, significantly reducing the computational and memory requirements. Some common techniques:
   - LoRA: Introduces low-rank matrices to reduce the number of trainable parameters.
   - Prefix-Tuning: Adds tunable prefixes to transformer layers rather than modifying the entire model.
   - Prompt-Tuning: Optimizes soft prompts to guide model behavior without changing the underlying parameters.
   - BitFit: Fine-tunes only the bias terms of the model, further reducing computational overhead.
   - IA3 (Infused Adapter by Inhibiting and Amplifying Inner Activations): Introduces lightweight adapters that control activation scaling within transformer layers.
   - QLoRA (Quantized LoRA): Extends LoRA by allowing fine-tuning of quantized models, reducing memory usage significantly.

2. **Distributed Training**: Given the sheer size of LLMs, distributed training across multiple GPUs or TPUs is often necessary. Tools like NVIDIA's NCCL or TensorFlow's `tf.distribute.MirroredStrategy` can aid in this.

3. **Memory Management**: LLMs can easily exceed the memory of single GPUs. Techniques such as gradient accumulation or model parallelism can help mitigate memory-related issues.
   - Gradient Accumulation: Reduces memory consumption by accumulating gradients over multiple forward passes.
   - Model Parallelism: Splits the model across multiple GPUs using techniques like Megatron-LM and DeepSpeed ZeRO optimizations.

4. **Hardware Choice**: Choosing the right hardware is crucial for optimizing training efficiency:
   1. Large-Scale LLMs (Full Fine-Tuning)
      - TPUs are specially optimized for the matrix multiplications typical of transformer architectures (the backbone of LLMs). If available, they can greatly speed up training and are often considered the best choice for large-scale training.
      - NVIDIA A100, H100, or multiple RTX 4090 GPUs with large VRAM are preferred for distributed GPU training.
      - AWS, Azure, and Google Cloud offer instances optimized for large-scale model training (e.g., NVIDIA DGX Cloud).
   2. PEFT-Based Fine-Tuning
      - Since PEFT techniques fine-tune only a small portion of the model, they can be executed on consumer-grade GPUs like RTX 3090, 4090, or A6000.
      - LoRA and Adapter fine-tuning can often be performed efficiently on a single GPU with 24GB+ VRAM.
   3. Smaller Models & Inference Optimization
      - Dataset Size: Unlike initial LLM pretraining, fine-tuning usually requires significantly smaller datasets, typically in the range of hundreds of thousands of samples rather than billions.
      - Data Cleaning & Augmentation: Ensuring high-quality, diverse, and domain-specific data improves fine-tuning outcomes.
- Synthetic Data Generation: When real-world data is scarce, synthetic data generation using self-distillation or data augmentation techniques can be useful. 5. **Training Data**: Dataset size for fine-tuning is significantly smaller than the initial pretraining phase, typically requiring hundreds of thousands of samples instead of billions. Ensuring high-quality, diverse, and domain-specific data improves fine-tuning outcomes. When real-world data is scarce, synthetic data generation using self-distillation or data augmentation techniques can be useful.--DIVIDER-- # Future Landscape and Emerging Technologies As Artificial Intelligence (AI) and machine learning continue to expand, so does the hardware that underpins them. The drive to make computations faster, more efficient, and more sustainable is never-ending. This section casts an eye on what the horizon holds for hardware in the AI domain. **1. Quantum Computing**: Arguably the most anticipated technological leap, quantum computers use quantum bits (qubits) to perform computations. Unlike traditional bits that are either 0s or 1s, qubits can be both simultaneously. This superposition property can revolutionize the speed and efficiency of complex computations, potentially dwarfing the capabilities of our current hardware. Quantum computing has the potential to revolutionize machine learning by processing vast amounts of data faster and discovering patterns that classical computers struggle to identify. Quantum-enhanced neural networks and quantum feature mapping could lead to breakthroughs in areas like natural language processing, image recognition, and predictive analytics. Quantum computing is advancing rapidly, with significant developments from leading technology companies and research institutions. Here are some latest updates: - Microsoft recently introduced its Majorana 1 quantum chip, which uses a special type of particle called Majorana fermions to build more stable qubits. This breakthrough could make quantum systems less prone to errors and bring us closer to building practical, large-scale quantum computers. - Google plans to launch commercial quantum applications within five years, targeting industries like materials science and pharmaceuticals. Their new 105-qubit "Willow" processor demonstrated record-breaking computational capabilities. - QuEra Computing introduced "Aquila," a 256-qubit neutral-atom quantum computer, enhancing programmable quantum simulations. **2. Neuromorphic Computing**: Inspired by the human brain, neuromorphic chips aim to mimic the way neurons and synapses function. These chips could pave the way for extremely power-efficient, fast, and adaptive machine learning systems. A key feature of neuromorphic computing is the use of spiking neural networks (SNNs), which function similarly to biological neurons by firing only when necessary. This event-driven processing significantly reduces power consumption compared to conventional AI accelerators like GPUs. Recent advancements include Intel’s Loihi 2, IBM’s memristor-based AI research, and the European Human Brain Project’s progress in neuromorphic hardware. These innovations enhance AI, robotics, IoT, and brain-computer interfaces, making neuromorphic systems ideal for adaptive, low-power computing. **3. Intelligence Processing Units (IPUs)**: A relative newcomer to the AI hardware scene, IPUs are specifically tailored for the demands of AI workloads. 
Unlike general-purpose processors, IPUs have many small cores and in-memory computation, offering faster processing and better efficiency for AI tasks. Developed by companies like Graphcore, these chips optimize for the sparse nature of neural network computations. With a unique architecture emphasizing a vast number of small cores and in-memory computation, IPUs promise significant speedups for specific AI tasks. As their ecosystem grows, IPUs might emerge as a major contender in the AI hardware spectrum. Graphcore’s IPUs continue to gain traction in AI research and industry. They are being adopted by major tech companies for large-scale machine learning tasks, offering significant performance improvements over traditional GPUs. **4. Domain-Specific Architectures**: The future may see a shift from general-purpose hardware like GPUs to more specialized, domain-specific architectures. These would be tailored for specific AI tasks, ensuring that every ounce of computational power is optimized for its intended purpose. Nvidia's Tensor Cores and Google’s Tensor Processing Units (TPUs) are examples of domain-specific chips that optimize deep learning tasks, achieving faster and more efficient AI model training and inference. **5. Advancements in Memory Technology**: Storing and retrieving data swiftly is as crucial as the computation itself. New memory technologies like MRAM (Magnetoresistive Random Access Memory) promise faster access times, lower power consumption, and higher durability. MRAM is seeing increased adoption in AI hardware as companies like Intel and Samsung work to integrate it into next-gen memory solutions. The transition from traditional DRAM to MRAM promises significant improvements in data speed and energy efficiency. **6. Edge Computing and AI**: With the proliferation of IoT devices, there's a drive to process data closer to where it's generated rather than in a centralized data centre, and this is known as edge computing. The future might see more powerful AI-capable chips embedded in everyday devices, enabling real-time processing without the need for cloud connectivity. Companies like Nvidia, Intel, and Qualcomm are developing AI chips specifically for edge computing, such as the Nvidia Jetson platform, which allows devices like drones and robots to perform real-time AI processing without needing cloud resources. **7. Greener Technologies**: With the environmental impact of computing becoming a central concern, future technologies will be geared towards sustainability. This encompasses energy-efficient chips, sustainable manufacturing processes, and hardware that has a reduced carbon footprint. Several companies are incorporating sustainability into their hardware designs. For example, Nvidia’s GPUs are designed with lower power consumption in mind, and companies like AMD are working on reducing the carbon footprint of their manufacturing processes. **8. Open Hardware Movement**: Mirroring the open-source software movement, there's growing momentum around open hardware. Such initiatives could democratize access to advanced hardware technologies, enabling a more diverse group of innovators to contribute to the AI revolution. Open hardware platforms like RISC-V are gaining popularity, offering an open-source alternative to proprietary chip designs. This movement is expected to drive innovation by enabling a broader range of innovators to contribute to the advancement of AI hardware. **9. 
Semiconductor Process Innovations**: The race to make computer chips smaller and more powerful has led to **5-nanometer (nm) chips** becoming the latest standard for training advanced AI models like GPT-4. The next generation of chips is already aiming for sizes smaller than 2nm. To put this into perspective, the SARS-CoV-2 coronavirus, which caused the COVID-19 pandemic, is about 50–150nm in size. These tiny chips pack in more transistors than ever before, making AI systems faster, more efficient, and a true marvel of modern engineering. --DIVIDER-- # Conclusion The progress of AI is closely tied to the hardware that drives it. From the foundational role of CPUs to the emerging presence of IPUs and the potential of quantum and neuromorphic computing, having a grasp on the strengths and limitations of these tools is essential for AI practitioners. Armed with this understanding, we can make choices that boost the performance and impact of our projects. As technology continues to advance, staying informed and flexible will be crucial in the rapidly changing realm of AI.--DIVIDER-- # References 1. [ImageNet classification with deep convolutional neural networks](https://papers.nips.cc/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf) - A seminal paper by Krizhevsky, Sutskever, and Hinton that propelled GPUs into the AI limelight. 2. [What is GPU Computing?](https://developer.nvidia.com/what-is-gpu-computing) - An introduction to GPU computing by Nvidia Developer. 3. [Introduction to TPUs](https://cloud.google.com/tpu/docs/tpus) - Google Cloud's comprehensive guide on Tensor Processing Units. 4. [IPU Architecture](https://www.graphcore.ai/products/ipu) - Product page for Graphcore's Intelligence Processing Unit (IPU). 5. [PyTorch/XLA documentation](https://pytorch.org/xla/release/1.5/index.html) - Official documentation from PyTorch on utilizing TPUs. 6. [Open-source hardware](https://en.wikipedia.org/wiki/Open-source_hardware) - Wikipedia's overview of the open-source hardware movement. 7. [Neuromorphic computing](https://www.intel.com/content/www/us/en/research/neuromorphic-computing.html) - Intel's overview of neuromorphic computing.
SW9KU4DapFrs
mo.abdelhamid
cc-by-sa
Hyper-Parameter Tuning (HPT) Using Optuna
![hero.png](hero_hpt.png)--DIVIDER--# Overview Welcome to our publication on integrating hyperparameter tuning into our reusable machine learning models. In this publication, we'll leverage Optuna to optimize the performance of our existing Random Forest time step classifier model. Here's a brief outline of what we'll cover: - Introduction to Hyperparameter Tuning: Discussing the concept of hyperparameter tuning and its importance in machine learning. - Bayesian Optimization for Hyperparameter Tuning: Discussing the concept of Bayesian optimization and its use in hyperparameter tuning. - Introducing Optuna: Introducing Optuna, the library we'll use for hyperparameter tuning, and discussing why we chose it. - Implementing Hyperparameter Tuning for our Model: Providing a step-by-step walkthrough on how to define hyperparameters and apply Optuna to our Random Forest model. - Best Practices for Hyperparameter Tuning: Reviewing the best practices for hyperparameter tuning your machine learning models. By the end of this tutorial, you'll have a solid understanding of how to enhance your machine learning model's performance through hyperparameter tuning, adding another layer of adaptability to your model implementations. Let's get started! --DIVIDER--Code Repository The concepts in this publication are implemented in this repository: https://github.com/readytensor/rt_tspc_random_forest_sklearn--DIVIDER--# Introduction to Hyperparameter Tuning In the world of machine learning, hyperparameters critically influence a model's performance. These are parameters set before the learning process starts, and finding the right combination can be a complex task. This is where hyperparameter tuning comes in, with several approaches available, each with their pros and cons. These include manual search, grid search, and random search. Manual search, as the name suggests, involves manually adjusting hyperparameters and observing the results. This approach is highly time-consuming and requires a solid understanding of how different hyperparameters affect the learning algorithm. Grid search and random search are more systematic. Grid search involves defining a set of possible values for each hyperparameter and systematically testing all combinations. This method is exhaustive and guarantees finding the best combination within the defined grid but can be computationally expensive. On the other hand, random search randomly selects combinations of hyperparameters to test, which can be more efficient but may miss the optimal combination. In this tutorial, we opt for a more sophisticated method: Bayesian optimization using Gaussian Processes, implemented through the Optuna library. Bayesian optimization provides a principled technique based on information from past evaluations to find the minimum of a function. We chose Bayesian optimization because it can find better results with fewer function evaluations compared to grid search and random search. It's more suitable for high-dimensional hyperparameter spaces and scenarios where function evaluations are costly, which is typical in machine learning model training. By the end of this tutorial, you'll understand how to perform hyperparameter tuning using Optuna and integrate it into your machine learning projects for better model performance. Let's dive in!--DIVIDER--# Bayesian Optimization for HPT Hyperparameter tuning is a process in machine learning that essentially boils down to an optimization problem. 
Here, we're attempting to discover the ideal hyperparameter values that maximize our model's performance. The objective function, which we aim to minimize, in this context is our model's validation error. The variables we manipulate to achieve this are the hyperparameters. The goal, therefore, is to find those hyperparameter values that lead to the minimum validation error. However, this process isn't straightforward. Training machine learning models is a resource-intensive task that requires significant computational power and time, particularly when dealing with models that have numerous hyperparameters. Given these constraints, it's desirable to reduce the number of objective function evaluations while still arriving at a satisfactory solution. This is precisely where Bayesian Optimization becomes invaluable. In essence, Bayesian Optimization is designed to locate the minimum of a function with as few iterations as possible. It works by constructing a posterior distribution of functions, typically using Gaussian processes, that serves as the best approximation of the function we're trying to optimize. As we gather more observations, this posterior distribution becomes increasingly refined, aiding the model in determining which regions of the hyperparameter space are worth exploring versus those that aren't. Thus, the optimization process is a delicate balancing act between exploration, where we sample areas of high uncertainty, and exploitation, where we sample areas estimated to have a good performance. In practical terms, this means we strive to locate the optimal values with the fewest steps possible. This efficiency is achieved by employing a surrogate optimization problem - in this case, finding the maximum of the acquisition function - which is substantially cheaper to evaluate compared to the original optimization problem. Bayesian Optimization involves the following two core components: 1 - Surrogate model 2 - Acquisition function Let's review each of these components in more detail. --DIVIDER--## Surrogate Model The surrogate model plays a crucial role in Bayesian optimization and it's important to understand it. A surrogate model is an approximation of the objective function that is cheaper to evaluate. It is built based on the evaluations of the objective function at previously sampled points. The surrogate model not only provides an estimate of the objective function at any point in the hyperparameter space but also quantifies the uncertainty of this estimate. Areas of the hyperparameter space that have not been sampled much will have higher uncertainty. In Bayesian optimization, a Gaussian Process is often used as the surrogate model because it provides a measure of uncertainty for the function estimate. Other surrogate models include Random Forests and Gradient Boosting Machines. The surrogate model is used to select the next point to evaluate in the hyperparameter space. This is done through the acquisition function, which takes into account both the estimate of the objective function from the surrogate model and the associated uncertainty. In conclusion, the surrogate model is a critical component of the Bayesian optimization process. 
It provides a balance between exploration (sampling in areas of high uncertainty) and exploitation (sampling in areas estimated to have good performance) that is key to the efficiency of Bayesian optimization.--DIVIDER--## Acquisition Function The acquisition function is used by the Bayesian optimization process to decide where to sample next. It trades off exploitation (sampling where the surrogate model predicts a good objective) and exploration (sampling in areas of high uncertainty). We will be using the Expected Improvement (EI) function which is a commonly used function in Bayesian optimization, including in hyperparameter tuning. The EI function provides a balance between exploration and exploitation. At each step, the point with the highest expected improvement is chosen as the next point to evaluate. In simple terms, EI gives a score to every point in the hyperparameter space. This score is high if the point is expected to improve upon the current best hyperparameter configuration (exploitation), and also if the uncertainty at that point is high (exploration). Thus, EI helps to guide the search process towards potentially optimal areas. Besides Expected Improvement, there are other acquisition functions you could use: Probability of Improvement (PI): This function selects the next point where the probability of improving upon the best observed point so far is the highest. It tends to focus more on exploitation rather than exploration. Lower Confidence Bound (LCB): This function chooses the point that has the lowest value of the surrogate model minus a constant times the uncertainty. By adjusting the constant, we can control the balance between exploration and exploitation. Upper Confidence Bound (UCB): Similar to LCB, but it chooses the point that has the highest value of the surrogate model plus a constant times the uncertainty. It is used when we want to maximize the objective function. Entropy Search (ES) and Predictive Entropy Search (PES): These are more complex acquisition functions that aim to reduce the entropy of the distribution over the minimum of the objective function. Knowledge Gradient (KG): This function estimates the improvement in the optimal solution resulting from the addition of a sample at a specific location. The choice of acquisition function depends on your specific problem and computational resources, as some acquisition functions are more computationally demanding than others. Let us now proceed to implementing hyperparameter tuning using Optuna.--DIVIDER--# Introducing Optuna Optuna, is a Python library that specializes in optimization tasks. We will use it for tuning the hyperparameters for our random forest classifier. To install Optuna, you can use pip, a package installer for Python. Open your command line and run the following command: ```python pip install optuna ``` :::info{title="Info"} Optuna vs. Other Libraries While Optuna is an excellent tool for hyperparameter tuning machine learning models, it's worth noting that it isn't the only one. Other libraries such as Hyperopt, Scikit-Optimize, and Spearmint also offer robust hyperparameter tuning capabilities. Each of these libraries comes with their unique features and advantages, and the choice of library often comes down to specific project requirements and personal preference. ::: Optuna offers a few methods for hyperparameter tuning, including Grid Search, Random Search, and Bayesian Optimization. In this tutorial, we'll focus on Bayesian Optimization. 
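To make this concrete, here is a small sketch (assuming Optuna is installed) showing that the search strategy is simply the sampler you pass to `optuna.create_study`; the grid values and the toy objective below are placeholders for illustration only.

```python
import optuna

def objective(trial):
    # Stand-in objective; in practice this would train and evaluate a model
    x = trial.suggest_float("x", -10, 10)
    return (x - 2) ** 2

# Grid search over an explicit set of candidate values
grid_study = optuna.create_study(
    sampler=optuna.samplers.GridSampler({"x": [-5.0, 0.0, 2.0, 5.0]})
)

# Random search
random_study = optuna.create_study(sampler=optuna.samplers.RandomSampler(seed=42))

# Bayesian optimization with the Tree-structured Parzen Estimator (our focus here)
tpe_study = optuna.create_study(sampler=optuna.samplers.TPESampler(seed=42))

tpe_study.optimize(objective, n_trials=30)
print(tpe_study.best_params)
```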
To execute hyperparameter tuning using Bayesian optimization in Optuna, we require three main components:

1- A function to optimize: This is typically the model's performance evaluated on a validation set with a given set of hyperparameters.

2- A search space: This is a predefined range of potential hyperparameter values within which we are interested in searching.

3- A sampler: This refers to the particular optimization method used to search through the hyperparameter space. In Optuna, one commonly used sampler is the TPESampler.

--DIVIDER--## Function to optimize

In a machine learning context, the function to optimize is often a wrapper around your model's training and evaluation procedure. This function should take as input a set of hyperparameters, train your model with these hyperparameters, evaluate it on a validation set, and then return the evaluation metric you wish to minimize (or the negative of the metric you wish to maximize).

Here is a pseudo-code example to illustrate this concept:

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Suppose we have some data loaded in X and y
X, y = load_your_data()

def objective_function(params):
    # params is a list of hyperparameters
    learning_rate, n_estimators, criterion = params

    # Create and train the model with the provided hyperparameters
    model = GradientBoostingClassifier(
        learning_rate=learning_rate,
        n_estimators=n_estimators,
        criterion=criterion,
    )

    # Evaluate the model using cross-validation
    score = cross_val_score(model, X, y, cv=5).mean()

    # Since we want to maximize the cross-validation score (which is accuracy),
    # but Bayesian optimization frameworks typically minimize the objective,
    # we return the negative of the score
    return -score
```

In the code example above, the function `objective_function` is the one we want to minimize. Note that we are using 5-fold cross-validation in this example, but you could use any type of validation you want, including a simple train-test split.

## Defining a Search Space

In Optuna, the search space is defined directly within the objective function, where each hyperparameter is specified using Optuna's domain-specific language (DSL). Hyperparameters are defined individually through methods such as **suggest_float** for continuous parameters, **suggest_int** for integer parameters, and **suggest_categorical** for choosing from a set of categorical options. Each of these methods takes arguments that define the range and characteristics of the hyperparameter, tailored to the specific needs of the optimization task.

Here is an example of defining a search space:

```python
import optuna

# These calls are made inside the objective function, which receives a `trial` object
# Define the hyperparameters search space
learning_rate = trial.suggest_float("learning_rate", 0.01, 0.2, log=True)
n_estimators = trial.suggest_int("n_estimators", 50, 1000)
criterion = trial.suggest_categorical("criterion", ["friedman_mse", "mse", "mae"])
```

In this example, we have three hyperparameters: learning_rate, which is a continuous value sampled in a log-uniform way between 0.01 and 0.2; n_estimators, which is an integer value between 50 and 1000; and criterion, which can be "friedman_mse", "mse" or "mae". The use of `log=True` for the learning rate means that we initially believe that all scales of learning rate within this range are equally likely to be useful, but we are interested in resolving smaller values with higher precision. However, it's crucial to understand that this is just a prior belief.
As Bayesian optimization proceeds, it utilizes the data collected about the function's performance under different hyperparameters to update this belief. The actual search for hyperparameters, driven by the acquisition function, won't remain uniform but will be more focused on regions that are likely to offer better results according to the updated belief, also known as the posterior. This approach to exploration and exploitation allows the Bayesian optimization process to efficiently navigate the search space. ## Defining a sampler: TPESampler The sampler in Optuna refers to the method used to search through the hyperparameter space. The TPESampler performs Bayesian Optimization using Tree-structured Parzen Estimators. The sampler aims to find the minimum value of the optimization function (often the validation error of the machine learning model) within the given search space, taking into account both the prior distribution of the hyperparameters and the observations made so far to update the posterior belief about the function. --DIVIDER--## Example of Bayesian Optimization with Optuna Let's demonstrate Bayesian Optimization using Optuna with an example. Suppose we are trying to find the minimum of the function 𝑓(𝑥)=(𝑥−2)^2. Here, 𝑥 is the parameter that we are trying to optimize so that we get the minimum value for 𝑓(𝑥). This is a simple enough example that we could just use basic derivative calculus to find the minimum of this function: take the derivative of 𝑓(𝑥) with respect to 𝑥, set it equal to zero, and solve for 𝑥. The best value of 𝑥 is 2.0 yielding 𝑓∗(𝑥)=0. However, let's assume that we don't know the function 𝑓(𝑥) - it's a black box to us. We can only sample from it - meaning, we can only evaluate the function at different values of 𝑥 and observe the output. We want to find the minimum of 𝑓(𝑥) by sampling it at different values of 𝑥, and to do it in as few samples as possible. So, we will use Optuna to solve this optimization problem. Here is the code: ```python import optuna # Define the function to optimize def objective(trial): # Define the search space for x x = trial.suggest_float('x', -10, 10) return (x - 2) ** 2 # Create a study object that aims to minimize the objective function study = optuna.create_study(direction='minimize') # Optimize the study, specifying the number of optimization calls study.optimize(objective, n_trials=200) # Get the best parameters and the achieved minimum value best_params = study.best_params best_value = study.best_value print("Best value of x: ", best_params['x']) print("Minimum of (x-2)^2: ", best_value) ``` output: ``` Best value of x: 2.0006210254762364 Minimum of (x-2)^2: 3.856726421346927e-07 ``` In this example, TPESampler performs Bayesian optimization over the given search space to find the value of 𝑥 that minimizes the function 𝑓(𝑥). We are using 200 trials to find the optimum value of 𝑥.The search results indicate that the best value of 𝑥 is 2.0006, which is very close to the actual optimal value of 2.0. The minimum value of 𝑓(𝑥) is 3.856e-07, which is very close to 0. This is a toy example to understand the basic concept. In actual machine learning applications, the function to optimize would often be more complex, and the search space would typically contain multiple dimensions representing different hyperparameters. The function to optimize would return the validation error of the machine learning model for a given set of hyperparameters. 
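To bridge from the toy function to a real model, the sketch below (assuming scikit-learn and Optuna are installed) tunes a random forest classifier on scikit-learn's built-in breast cancer dataset. The dataset and the hyperparameter ranges are illustrative stand-ins, not the tuned settings from this publication's repository.

```python
import optuna
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

def objective(trial):
    # Search space: each hyperparameter is suggested from the trial object
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 50, 500),
        "max_depth": trial.suggest_int("max_depth", 2, 20),
        "min_samples_split": trial.suggest_int("min_samples_split", 2, 20),
        "max_features": trial.suggest_categorical("max_features", ["sqrt", "log2"]),
    }
    model = RandomForestClassifier(**params, random_state=42)

    # 5-fold cross-validated accuracy serves as the validation score to maximize
    return cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()

study = optuna.create_study(
    direction="maximize", sampler=optuna.samplers.TPESampler(seed=42)
)
study.optimize(objective, n_trials=50)

print("Best accuracy:", study.best_value)
print("Best hyperparameters:", study.best_params)
```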
--DIVIDER--# Best Practices for HPT

1- Define a reasonable hyperparameter space: Hyperparameters can take on any value, but not all values are meaningful. For example, a learning rate might be best within a certain range, and the number of layers in a neural network must be a positive integer. Therefore, defining a reasonable hyperparameter space can help speed up the tuning process and improve results.

2- Start with random or grid search: If you have no idea where to start, random search or grid search can be a good starting point for hyperparameter tuning. These methods can give you a rough idea of what values work best, which can help you narrow down the hyperparameter space for more advanced tuning methods.

3- Use Bayesian Optimization wisely: Bayesian optimization is an advanced technique that can find optimal hyperparameters efficiently, but it also requires careful setup. You need to define the objective function and the search space thoughtfully, and understand the sampler you choose (which encapsulates the surrogate model and acquisition function). Also, Bayesian optimization can take a long time to run, so it may not be the best choice for quick and dirty experiments.

4- Re-evaluate periodically: As you run more experiments and collect more data, it's a good idea to re-run hyperparameter tuning to see if the optimal hyperparameters have changed.

5- Save your results: Always record the results of your hyperparameter tuning, including the hyperparameters tested and the performance of the model. This information can be invaluable for future tuning efforts and for troubleshooting any problems. A short sketch of one way to do this with Optuna follows this list.

6- Ensure a robust evaluation process: The robustness of the evaluation process is vital when tuning hyperparameters. If minor changes in the dataset lead to drastically different loss values, then the evaluation process is not robust and can lead to erroneous conclusions about the optimal hyperparameters. Techniques such as cross-validation can be used to help ensure that the selected hyperparameters perform well on unseen data, providing a more robust and reliable evaluation.
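On point 5 specifically, Optuna can persist trials for you. Below is a brief sketch of one way to record results, reusing the toy objective from earlier; the study name, database path, and CSV filename are arbitrary placeholders, not requirements.

```python
import optuna

def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    return (x - 2) ** 2

# Persist all trials to a local SQLite file so the study can be resumed
# or inspected later (study name and file path are arbitrary choices).
study = optuna.create_study(
    study_name="toy_quadratic",
    storage="sqlite:///hpt_results.db",
    direction="minimize",
    load_if_exists=True,
)
study.optimize(objective, n_trials=50)

# Export every trial (parameters, objective value, state) for record-keeping
study.trials_dataframe().to_csv("hpt_trials.csv", index=False)
```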
--DIVIDER--# Full code example

```python
import torch
import optuna
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from optuna.visualization import plot_optimization_history, plot_param_importances

# Load the Breast Cancer dataset
data = load_breast_cancer()
X, y = data.data, data.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Standardize the data
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Convert to PyTorch tensors
X_train = torch.Tensor(X_train)
X_test = torch.Tensor(X_test)
y_train = torch.Tensor(y_train).long()
y_test = torch.Tensor(y_test).long()

# Create TensorDatasets and DataLoaders
train_dataset = TensorDataset(X_train, y_train)
test_dataset = TensorDataset(X_test, y_test)
train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=64, shuffle=False)

# Define the Neural Network architecture
class BreastCancerNet(nn.Module):
    def __init__(self, num_units, num_layers):
        super(BreastCancerNet, self).__init__()
        layers = []
        input_dim = X_train.shape[1]
        for _ in range(num_layers):
            layers.append(nn.Linear(input_dim, num_units))
            layers.append(nn.ReLU())
            input_dim = num_units  # Set input_dim to num_units for the next layer
        layers.append(nn.Linear(num_units, 2))  # Binary classification
        self.layers = nn.Sequential(*layers)

    def forward(self, x):
        return self.layers(x)

# Define the training and evaluation function
def train_and_evaluate(model, train_loader, test_loader, optimizer, criterion, device):
    model.to(device)
    model.train()
    for data, target in train_loader:
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()

    model.eval()
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            pred = output.argmax(dim=1, keepdim=True)
            correct += pred.eq(target.view_as(pred)).sum().item()
    accuracy = correct / len(test_loader.dataset)
    return accuracy

# Define the objective function for Optuna
def objective(trial):
    # Suggest values for the hyperparameters
    num_layers = trial.suggest_int('num_layers', 1, 5)
    num_units = trial.suggest_int('num_units', 10, 100)
    lr = trial.suggest_loguniform('lr', 1e-5, 1e-1)

    model = BreastCancerNet(num_units=num_units, num_layers=num_layers)
    optimizer = optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()

    # Use a GPU if available
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Train and evaluate the model
    accuracy = train_and_evaluate(model, train_loader, test_loader, optimizer, criterion, device)
    return accuracy

# Create the Optuna study and start the optimization
study = optuna.create_study(direction='maximize', sampler=optuna.samplers.TPESampler())
study.optimize(objective, n_trials=20)

print("Best trial:")
trial = study.best_trial
print(f"  Accuracy: {trial.value:.4f}")
print("  Params: ")
for key, value in trial.params.items():
    print(f"    {key}: {value}")

# Plot the optimization history
fig1 = plot_optimization_history(study)
fig1.show()

# Plot the parameter importances
fig2 = plot_param_importances(study)
fig2.show()
```

output:

```
[I 2024-08-01 17:52:30,443] Trial 0 finished
with value: 0.6228070175438597 and parameters: {'num_layers': 3, 'num_units': 10, 'lr': 0.004363460865583532}. Best is trial 0 with value: 0.6228070175438597. [I 2024-08-01 17:52:30,451] Trial 1 finished with value: 0.6228070175438597 and parameters: {'num_layers': 3, 'num_units': 77, 'lr': 1.842598326858712e-05}. Best is trial 0 with value: 0.6228070175438597. [I 2024-08-01 17:52:30,462] Trial 2 finished with value: 0.9385964912280702 and parameters: {'num_layers': 5, 'num_units': 55, 'lr': 0.003609795782359115}. Best is trial 2 with value: 0.9385964912280702. [I 2024-08-01 17:52:30,468] Trial 3 finished with value: 0.8771929824561403 and parameters: {'num_layers': 1, 'num_units': 47, 'lr': 0.0021827521866422095}. Best is trial 2 with value: 0.9385964912280702. [I 2024-08-01 17:52:30,474] Trial 4 finished with value: 0.9912280701754386 and parameters: {'num_layers': 2, 'num_units': 11, 'lr': 0.034868374724776115}. Best is trial 4 with value: 0.9912280701754386. [I 2024-08-01 17:52:30,484] Trial 5 finished with value: 0.9736842105263158 and parameters: {'num_layers': 5, 'num_units': 78, 'lr': 0.039365893118632374}. Best is trial 4 with value: 0.9912280701754386. [I 2024-08-01 17:52:30,493] Trial 6 finished with value: 0.7719298245614035 and parameters: {'num_layers': 3, 'num_units': 51, 'lr': 4.1074627423742965e-05}. Best is trial 4 with value: 0.9912280701754386. [I 2024-08-01 17:52:30,501] Trial 7 finished with value: 0.6228070175438597 and parameters: {'num_layers': 5, 'num_units': 20, 'lr': 1.2775137699933669e-05}. Best is trial 4 with value: 0.9912280701754386. [I 2024-08-01 17:52:30,508] Trial 8 finished with value: 0.8947368421052632 and parameters: {'num_layers': 2, 'num_units': 78, 'lr': 0.03681168690395871}. Best is trial 4 with value: 0.9912280701754386. [I 2024-08-01 17:52:30,515] Trial 9 finished with value: 0.631578947368421 and parameters: {'num_layers': 4, 'num_units': 42, 'lr': 0.0008712820687871211}. Best is trial 4 with value: 0.9912280701754386. [I 2024-08-01 17:52:30,526] Trial 10 finished with value: 0.9473684210526315 and parameters: {'num_layers': 1, 'num_units': 26, 'lr': 0.08606752240712784}. Best is trial 4 with value: 0.9912280701754386. [I 2024-08-01 17:52:30,537] Trial 11 finished with value: 0.9824561403508771 and parameters: {'num_layers': 2, 'num_units': 91, 'lr': 0.017944767247784325}. Best is trial 4 with value: 0.9912280701754386. [I 2024-08-01 17:52:30,551] Trial 12 finished with value: 0.9824561403508771 and parameters: {'num_layers': 2, 'num_units': 100, 'lr': 0.011206424000488947}. Best is trial 4 with value: 0.9912280701754386. [I 2024-08-01 17:52:30,561] Trial 13 finished with value: 0.956140350877193 and parameters: {'num_layers': 2, 'num_units': 97, 'lr': 0.00038158687510912465}. Best is trial 4 with value: 0.9912280701754386. [I 2024-08-01 17:52:30,572] Trial 14 finished with value: 0.9736842105263158 and parameters: {'num_layers': 2, 'num_units': 38, 'lr': 0.015384014691228767}. Best is trial 4 with value: 0.9912280701754386. [I 2024-08-01 17:52:30,583] Trial 15 finished with value: 0.8859649122807017 and parameters: {'num_layers': 1, 'num_units': 69, 'lr': 0.0002600382900329717}. Best is trial 4 with value: 0.9912280701754386. [I 2024-08-01 17:52:30,595] Trial 16 finished with value: 0.9298245614035088 and parameters: {'num_layers': 4, 'num_units': 64, 'lr': 0.09003127627687967}. Best is trial 4 with value: 0.9912280701754386. 
[I 2024-08-01 17:52:30,607] Trial 17 finished with value: 0.956140350877193 and parameters: {'num_layers': 2, 'num_units': 89, 'lr': 0.012919008952086327}. Best is trial 4 with value: 0.9912280701754386. [I 2024-08-01 17:52:30,618] Trial 18 finished with value: 0.956140350877193 and parameters: {'num_layers': 3, 'num_units': 31, 'lr': 0.026241644811039317}. Best is trial 4 with value: 0.9912280701754386. [I 2024-08-01 17:52:30,628] Trial 19 finished with value: 0.9649122807017544 and parameters: {'num_layers': 1, 'num_units': 63, 'lr': 0.008288502687247774}. Best is trial 4 with value: 0.9912280701754386. Best trial: Accuracy: 0.9912 Params: num_layers: 2 num_units: 11 lr: 0.034868374724776115 ```--DIVIDER-- ![optimization_history.png](optimization_history.png) ![hp_importance.png](hp_importance.png)--DIVIDER--# Summary This tutorial guided you through setting up hyperparameter tuning using Bayesian Optimization, defining the objective function, specifying the hyperparameter space, and running the tuning process. Key takeaways included the importance of defining an appropriate hyperparameter space, wisely utilizing Bayesian Optimization, and the necessity of diligent results tracking for future tuning efforts.
TLqRdPFx8Bjt
ready-tensor
cc-by-sa
A Comprehensive Comparison of AutoML Libraries for Binary Classification
![AutoML.jpg](AutoML.jpg)

# Introduction

In the rapidly evolving world of machine learning, AutoML libraries have become essential tools for data scientists and machine learning engineers. These libraries automate the time-consuming process of model selection, hyperparameter tuning, and feature engineering, making it easier to develop high-performing models. In this article, we will compare several popular AutoML libraries—AutoKeras, Auto-Sklearn, AutoGluon, H2O, FLAML, Lazy Predict, MLBox, mljar-supervised, TPOT, and PyCaret—on binary classification problems using the Ready Tensor platform.

--DIVIDER--# Methodology

To ensure a fair evaluation, we employed the standardized methods and Docker-based environment provided by the Ready Tensor platform. This platform ensures consistent dataset processing and benchmarking, allowing us to focus on the performance of the AutoML libraries without the influence of extraneous variables.

We evaluated each library's performance on multiple datasets, using AUC (Area Under the Curve) as our primary metric, alongside training time and RAM usage as performance indicators. These metrics provide a comprehensive view of each model's ability to distinguish between classes, as well as its efficiency. The datasets used span a variety of domains, ensuring that our results are robust and generalizable. Each library was tasked with creating a binary classification model for each dataset, and the average AUC, training time, and RAM usage were recorded.

## Preprocessing

No additional preprocessing was performed on the datasets, as the AutoML libraries are designed to handle feature engineering, data cleaning, and other preprocessing tasks automatically.

--DIVIDER--## Datasets

| Dataset | Industry | Observations | Features | Has Categorical Features? | Has Missing Values? | Balance Status |
| ------- | -------- | :----------: | :------: | :-----------------------: | :-----------------: | :------------: |
| Breast Cancer - Wisconsin | Biosciences / Healthcare | 569 | 32 | no | no | Balanced |
| Concentric Spheres Dataset | None (synthetic) | 3,000 | 9 | no | yes | Balanced |
| In-vehicle coupon recommendation | E-commerce | 12,684 | 25 | yes | yes | Balanced |
| Credit Approval | Financial services | 690 | 15 | yes | yes | Balanced |
| Electrical Grid Stability Simulated Data Data Set | Energy | 10,000 | 13 | no | no | Slightly Imbalanced |
| Employee Attrition dataset from PyCaret | Miscellaneous / Human Resource | 14,999 | 9 | yes | no | Slightly Imbalanced |
| Image Segmentation | Computer Vision | 2,310 | 20 | no | no | Imbalanced |
| Mushroom Data Set | Biosciences | 8,124 | 22 | yes | yes | Balanced |
| NBA binary classification dataset from Pycaret | Sports | 1,294 | 21 | no | no | Balanced |
| Online Shoppers Purchasing Intention | E-commerce | 12,330 | 17 | yes | no | Imbalanced |
| Spambase Data Set | Technology / Internet Services | 4,601 | 57 | no | no | Balanced |
| Spiral Dataset | None (synthetic) | 250 | 2 | no | no | Balanced |
| Telco customer churn | Telecom | 7,043 | 20 | no | yes | Imbalanced |
| Titanic Passenger Survival dataset | Tourism / Transportation | 1,309 | 10 | yes | yes | Balanced |
| Exclusive-Or dataset | None (synthetic) | 6,000 | 5 | no | no | Balanced |

For more details about the datasets, see: https://github.com/readytensor/rt-datasets-binary-classification

--DIVIDER--# Results

## AUC Scores

In this section we recorded the AUC for each model across all datasets, ranked from left to right by overall average score:

| Dataset \ AutoML Library | AutoGluon | FLAML | TPOT | PyCaret | mljar-supervised | LazyPredict | H2O AutoML | Auto-sklearn | MLBox | AutoKeras |
| ------------------------ | --------- | ----- | ---- | ------- | ---------------- | ----------- | ---------- | ------------ | ----- | --------- |
| Breast Cancer - Wisconsin | 0.969 | 0.997 | 0.99 | 0.997 | 0.992 | 0.996 | 0.995 | 0.998 | 0.994 | 0.996 |
| Concentric Spheres | 0.988 | 0.993 | 0.99 | 0.986 | 0.991 | 0.986 | 0.994 | 0.984 | 0.994 | 0.983 |
| In-vehicle coupon recommendation | 0.844 | 0.836 | 0.792 | 0.826 | 0.835 | 0.805 | 0.831 | 0.776 | 0.823 | 0.73 |
| Credit Approval | 0.986 | 0.918 | 0.893 | 0.907 | 0.919 | 0.845 | 0.913 | 0.892 | 0.902 | 0.857 |
| Electrical Grid Stability Simulated Data | 0.995 | 0.992 | 0.996 | 0.995 | 0.989 | 0.994 | 0.987 | 0.972 | 0.986 | 0.99 |
| Employee Attrition dataset from PyCaret | 1 | 0.991 | 0.991 | 0.99 | 0.991 | 0.99 | 0.99 | 0.991 | 0.992 | 0.737 |
| Image Segmentation | 1 | 1 | 1 | 1 | 0.983 | 0.999 | 0.996 | 1 | 0.991 | 1 |
| Mushroom | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| NBA binary classification from Pycaret | 0.769 | 0.746 | 0.768 | 0.765 | 0.727 | 0.759 | 0.764 | 0.729 | 0.768 | 0.745 |
| Online Shoppers Purchasing Intention | 0.926 | 0.932 | 0.922 | 0.92 | 0.929 | 0.918 | 0.928 | 0.933 | 0.931 | 0.903 |
| Spambase | 1 | 0.984 | 0.988 | 0.986 | 0.986 | 0.986 | 0.988 | 0.986 | 0.988 | 0.972 |
| Spiral | 0.998 | 0.987 | 1 | 0.916 | 0.881 | 0.899 | 0.896 | 0.931 | 0.765 | 0.502 |
| Telco customer churn | 0.862 | 0.856 | 0.841 | 0.862 | 0.846 | 0.862 | 0.866 | 0.859 | 0.863 | 0.838 |
| Titanic Passenger Survival | 0.86 | 0.866 | 0.852 | 0.861 | 0.859 | 0.868 | 0.855 | 0.81 | 0.864 | 0.772 |
| Exclusive-Or | 0.962 | 0.994 | 0.97 | 0.955 | 0.972 | 0.94 | 0.778 | 0.915 | 0.487 | 0.59 |
| Average AUC over all the datasets | 0.944 | 0.939 | 0.933 | 0.931 | 0.927 | 0.923 | 0.919 | 0.918 | 0.89 | 0.841 |
Most of the models produced AUC scores that were remarkably close, with average scores ranging between 0.92 and 0.94.

:::info{title="Insight"}
AutoGluon secured the top spot with the highest AUC score
:::

Additionally, the majority of the packages demonstrated similarly close performance on imbalanced datasets, indicating their robustness and adaptability to varying data distributions.

--DIVIDER--## Execution Time

The next two graphs further demonstrate the performance of the models regarding execution time in both training and prediction for the "In-vehicle coupon recommendation" dataset. This dataset recorded the highest execution time across all the datasets, providing a clear view of how each model performs under more time-intensive conditions.

![Training_ET_bar_plot.png](Training_ET_bar_plot.png)

:::info{title="Insight"}
Lazy Predict has the lowest training execution time, while AutoKeras, TPOT, mljar-supervised, and Auto-Sklearn have high training execution times.
:::

![Prediction_ET_bar_plot.png](Prediction_ET_bar_plot.png)

:::info{title="Insight"}
mljar-supervised has by far the highest prediction execution time.
:::

> *For more details about the execution times across all the models, please check the appendix*

--DIVIDER--## CPU RAM usage

![CPU_Mem_bar_plot.png](CPU_Mem_bar_plot.png)

:::info{title="Insight"}
It's clear that mljar-supervised has the highest CPU memory usage
:::

> *For more details about the CPU memory usage across all the models, please check the appendix*

--DIVIDER--# Conclusion

While AutoML libraries promise to streamline the machine learning process, this comparison reveals that not all tools are created equal. The close AUC scores suggest that many of these libraries can effectively handle binary classification tasks, but the differences in execution time and resource consumption tell a more nuanced story. Lazy Predict's speed highlights its utility for rapid prototyping, yet its simplicity may not suit more complex tasks. Meanwhile, the slower prediction times of mljar-supervised might be acceptable in exchange for higher accuracy in some cases, but they could be a bottleneck in time-sensitive applications.

AutoGluon's top AUC score positions it as a strong contender for those prioritizing predictive performance, yet its resource demands may not be ideal for every scenario. The slight edge in AUC scores across most models implies that AutoML has made significant strides, but it also raises the question: Are these tools truly ready to replace the nuanced decision-making of a skilled data scientist?

In a field where every second and byte of RAM can impact performance and costs, the choice of an AutoML library must be as strategic as the problem itself. This analysis underscores the importance of balancing speed, accuracy, and resource efficiency when selecting the right tool for your specific needs. The promise of AutoML is clear, but the reality is that the choice of library can still significantly impact the success of your machine learning projects.
--DIVIDER--# Appendix <br> ## Execution Time - ET <br> <table> <tr> <th >Dataset \ AutoML Library</th> <th colspan="2">AutoGluon</th><th colspan="2">FLAML</th><th colspan="2">TPOT</th><th colspan="2">PyCaret</th><th colspan="2">mljar-supervised</th><th colspan="2">LazyPredict</th><th colspan="2">H2O AutoML</th><th colspan="2">Auto-Sklearn</th><th colspan="2">MLBox</th><th colspan="2">AutoKeras</th> <tr><th>Metric</th><td>Training ET</td><td>Prediction ET</td><td>Training ET</td><td>Prediction ET</td><td>Training ET</td><td>Prediction ET</td><td>Training ET</td><td>Prediction ET</td><td>Training ET</td><td>Prediction ET</td><td>Training ET</td><td>Prediction ET</td><td>Training ET</td><td>Prediction ET</td><td>Training ET</td><td>Prediction ET</td><td>Training ET</td><td>Prediction ET</td><td>Training ET</td><td>Prediction ET</td></tr> <tr><th>Breast Cancer - Wisconsin</th><td>27.13</td><td>0.14</td><td>60.09</td><td>0.17</td><td>59.83</td><td>0.13</td><td>21.26</td><td>0.29</td><td>194.74</td><td>29.59</td><td>1.73</td><td>0.24</td><td>40.22</td><td>5.79</td><td>182.1</td><td>6.6</td><td>5.73</td><td>0.8</td><td>143.77</td><td>1.0</td></tr> <tr><th>Concentric Spheres Dataset</th><td>36.91</td><td>5.68</td><td>60.3</td><td>0.14</td><td>109.92</td><td>0.16</td><td>28.86</td><td>0.31</td><td>196.64</td><td>20.57</td><td>3.85</td><td>0.25</td><td>52.44</td><td>5.88</td><td>175.48</td><td>1.04</td><td>9.41</td><td>0.46</td><td>408.55</td><td>0.88</td></tr> <tr><th>Credit Approval</th><td>27.95</td><td>0.54</td><td>60.2</td><td>0.18</td><td>68.78</td><td>0.13</td><td>29.32</td><td>0.64</td><td>193.22</td><td>25.99</td><td>1.72</td><td>0.25</td><td>24.17</td><td>6.19</td><td>177.73</td><td>2.72</td><td>18.83</td><td>1.25</td><td>52.8</td><td>0.9</td></tr> <tr><th>Electrical Grid Stability Simulated Data Data Set</th><td>136.32</td><td>5.87</td><td>61.66</td><td>0.18</td><td>499.87</td><td>0.08</td><td>58.92</td><td>0.31</td><td>200.16</td><td>19.22</td><td>11.66</td><td>0.25</td><td>121.8</td><td>6.42</td><td>176.23</td><td>4.47</td><td>16.11</td><td>0.78</td><td>634.69</td><td>0.97</td></tr> <tr><th>Employee Attrition dataset from PyCaret</th><td>84.27</td><td>0.6</td><td>60.35</td><td>0.17</td><td>369.85</td><td>0.2</td><td>53.86</td><td>0.48</td><td>202.49</td><td>19.48</td><td>12.7</td><td>0.34</td><td>111.57</td><td>6.76</td><td>178.27</td><td>6.06</td><td>28.01</td><td>1.07</td><td>968.18</td><td>1.16</td></tr> <tr><th>Exclusive-Or dataset</th><td>64.28</td><td>6.36</td><td>60.24</td><td>0.14</td><td>248.57</td><td>0.07</td><td>37.25</td><td>0.32</td><td>192.89</td><td>19.4</td><td>6.19</td><td>0.22</td><td>39.09</td><td>6.05</td><td>173.75</td><td>0.6</td><td>9.21</td><td>0.42</td><td>281.57</td><td>0.85</td></tr> <tr><th>Image Segmentation</th><td>29.39</td><td>0.32</td><td>60.09</td><td>0.17</td><td>116.05</td><td>0.15</td><td>25.56</td><td>0.33</td><td>202.42</td><td>21.62</td><td>2.81</td><td>0.23</td><td>73.3</td><td>5.97</td><td>177.6</td><td>6.15</td><td>11.77</td><td>0.67</td><td>285.32</td><td>0.93</td></tr> <tr><th>In-vehicle coupon recommendation</th><td>160.17</td><td>8.11</td><td>60.79</td><td>0.31</td><td>530.31</td><td>0.39</td><td>80.8</td><td>0.86</td><td>195.02</td><td>17.82</td><td>19.37</td><td>0.6</td><td>94.35</td><td>8.36</td><td>182.46</td><td>7.07</td><td>77.49</td><td>4.01</td><td>273.01</td><td>1.27</td></tr> <tr><th>Mushroom Data 
Set</th><td>80.05</td><td>0.38</td><td>60.18</td><td>0.2</td><td>157.03</td><td>0.16</td><td>54.11</td><td>0.73</td><td>197.1</td><td>20.67</td><td>5.2</td><td>0.42</td><td>204.96</td><td>7.37</td><td>179.83</td><td>9.53</td><td>59.15</td><td>2.86</td><td>244.23</td><td>1.23</td></tr> <tr><th>NBA binary classification dataset from Pycaret</th><td>25.71</td><td>6.17</td><td>60.13</td><td>0.17</td><td>98.32</td><td>0.16</td><td>22.44</td><td>0.29</td><td>198.42</td><td>21.25</td><td>2.16</td><td>0.23</td><td>32.71</td><td>6.03</td><td>175.21</td><td>0.7</td><td>6.86</td><td>0.55</td><td>118.38</td><td>0.85</td></tr> <tr><th>Online Shoppers Purchasing Intention </th><td>70.14</td><td>2.06</td><td>60.14</td><td>0.27</td><td>789.96</td><td>0.26</td><td>54.19</td><td>0.51</td><td>196.91</td><td>19.94</td><td>13.72</td><td>0.41</td><td>75.47</td><td>6.86</td><td>178.55</td><td>5.49</td><td>51.66</td><td>2.1</td><td>494.98</td><td>1.32</td></tr> <tr><th>Spambase Data Set</th><td>41.47</td><td>0.52</td><td>60.2</td><td>0.25</td><td>576.36</td><td>0.23</td><td>35.75</td><td>0.4</td><td>201.19</td><td>21.2</td><td>6.08</td><td>0.33</td><td>95.22</td><td>6.35</td><td>177.45</td><td>6.78</td><td>19.78</td><td>1.53</td><td>148.28</td><td>1.14</td></tr> <tr><th>Spiral Dataset</th><td>25.61</td><td>0.1</td><td>60.6</td><td>0.33</td><td>29.57</td><td>0.05</td><td>18.57</td><td>0.28</td><td>197.72</td><td>36.01</td><td>0.91</td><td>0.17</td><td>19.31</td><td>5.66</td><td>178.1</td><td>5.54</td><td>2.77</td><td>0.26</td><td>65.73</td><td>0.79</td></tr> <tr><th>Telco customer churn</th><td>48.07</td><td>6.51</td><td>60.13</td><td>0.19</td><td>330.72</td><td>0.27</td><td>51.61</td><td>0.68</td><td>194.87</td><td>19.78</td><td>8.52</td><td>0.33</td><td>58.81</td><td>6.89</td><td>183.17</td><td>4.26</td><td>38.0</td><td>1.41</td><td>256.81</td><td>0.99</td></tr> <tr><th>Titanic Passenger Survival dataset</th><td>31.11</td><td>2.14</td><td>60.08</td><td>0.17</td><td>46.37</td><td>0.1</td><td>24.4</td><td>0.41</td><td>202.03</td><td>22.25</td><td>1.91</td><td>0.21</td><td>25.89</td><td>5.86</td><td>176.09</td><td>6.55</td><td>10.96</td><td>0.46</td><td>178.77</td><td>0.89</td></tr> <tr><th>Average</th><td>59.24</td><td>3.03</td><td>60.35</td><td>0.2</td><td>268.77</td><td>0.17</td><td>39.79</td><td>0.46</td><td>197.72</td><td>22.32</td><td>6.57</td><td>0.3</td><td>71.29</td><td>6.43</td><td>178.13</td><td>4.9</td><td>24.38</td><td>1.24</td><td>303.67</td><td>1.01</td></tr> </table> --DIVIDER--## CPU Memory usage <br> <table> <tr> <th >Dataset \ AutoML Library</th> <th colspan="2">AutoGluon</th><th colspan="2">FLAML</th><th colspan="2">TPOT</th><th colspan="2">PyCaret</th><th colspan="2">mljar-supervised</th><th colspan="2">LazyPredict</th><th colspan="2">H2O AutoML</th><th colspan="2">Auto-Sklearn</th><th colspan="2">MLBox</th><th colspan="2">AutoKeras</th> <tr><td></td><td>Training CPU Mem</td><td>Prediction CPU Mem</td><td>Training CPU Mem</td><td>Prediction CPU Mem</td><td>Training CPU Mem</td><td>Prediction CPU Mem</td><td>Training CPU Mem</td><td>Prediction CPU Mem</td><td>Training CPU Mem</td><td>Prediction CPU Mem</td><td>Training CPU Mem</td><td>Prediction CPU Mem</td><td>Training CPU Mem</td><td>Prediction CPU Mem</td><td>Training CPU Mem</td><td>Prediction CPU Mem</td><td>Training CPU Mem</td><td>Prediction CPU Mem</td><td>Training CPU Mem</td><td>Prediction CPU Mem</td></tr> <tr><th>Breast Cancer - 
Wisconsin</th><td>69.61</td><td>2.96</td><td>4.19</td><td>3.07</td><td>4.88</td><td>2.49</td><td>12.46</td><td>4.13</td><td>235.7</td><td>453.81</td><td>2.0</td><td>3.95</td><td>3.07</td><td>2.98</td><td>35.96</td><td>100.81</td><td>3.24</td><td>4.67</td><td>10.02</td><td>6.23</td></tr> <tr><th>Concentric Spheres Dataset</th><td>75.32</td><td>58.47</td><td>7.94</td><td>3.16</td><td>4.47</td><td>5.82</td><td>13.41</td><td>4.58</td><td>157.18</td><td>335.87</td><td>2.1</td><td>5.47</td><td>3.73</td><td>3.78</td><td>17.64</td><td>25.99</td><td>4.68</td><td>4.07</td><td>9.48</td><td>5.88</td></tr> <tr><th>Credit Approval</th><td>71.94</td><td>11.84</td><td>3.32</td><td>2.74</td><td>4.69</td><td>3.99</td><td>12.8</td><td>6.2</td><td>186.65</td><td>408.91</td><td>2.46</td><td>4.98</td><td>3.26</td><td>2.73</td><td>18.7</td><td>41.94</td><td>3.88</td><td>5.68</td><td>9.92</td><td>6.16</td></tr> <tr><th>Electrical Grid Stability Simulated Data Data Set</th><td>118.44</td><td>62.26</td><td>11.73</td><td>5.92</td><td>7.67</td><td>2.54</td><td>15.63</td><td>5.1</td><td>150.23</td><td>339.96</td><td>5.69</td><td>14.94</td><td>3.34</td><td>3.18</td><td>72.5</td><td>270.72</td><td>8.09</td><td>5.41</td><td>11.12</td><td>6.27</td></tr> <tr><th>Employee Attrition dataset from PyCaret</th><td>79.69</td><td>13.55</td><td>18.47</td><td>5.51</td><td>10.5</td><td>6.21</td><td>21.15</td><td>12.47</td><td>158.23</td><td>343.8</td><td>15.83</td><td>17.25</td><td>3.36</td><td>3.24</td><td>63.91</td><td>235.24</td><td>15.13</td><td>11.22</td><td>13.74</td><td>6.93</td></tr> <tr><th>Exclusive-Or dataset</th><td>118.12</td><td>71.28</td><td>11.21</td><td>4.5</td><td>8.22</td><td>1.78</td><td>14.31</td><td>5.09</td><td>144.4</td><td>339.38</td><td>2.47</td><td>15.76</td><td>3.66</td><td>3.15</td><td>11.55</td><td>11.44</td><td>5.0</td><td>4.12</td><td>9.7</td><td>5.86</td></tr> <tr><th>Image Segmentation</th><td>70.97</td><td>7.44</td><td>5.28</td><td>3.09</td><td>5.83</td><td>2.47</td><td>12.97</td><td>4.32</td><td>160.37</td><td>347.06</td><td>3.03</td><td>4.46</td><td>4.04</td><td>4.21</td><td>32.72</td><td>95.67</td><td>5.15</td><td>4.54</td><td>9.71</td><td>5.97</td></tr> <tr><th>In-vehicle coupon recommendation</th><td>183.51</td><td>160.75</td><td>35.41</td><td>11.35</td><td>300.34</td><td>32.18</td><td>52.81</td><td>49.67</td><td>147.8</td><td>292.59</td><td>76.57</td><td>73.4</td><td>3.59</td><td>3.35</td><td>235.27</td><td>811.21</td><td>27.32</td><td>15.15</td><td>22.17</td><td>9.46</td></tr> <tr><th>Mushroom Data Set</th><td>78.66</td><td>8.62</td><td>8.21</td><td>4.2</td><td>158.0</td><td>3.44</td><td>37.05</td><td>9.73</td><td>158.49</td><td>337.16</td><td>49.03</td><td>26.64</td><td>1.57</td><td>1.51</td><td>59.87</td><td>160.75</td><td>15.2</td><td>10.35</td><td>17.36</td><td>7.54</td></tr> <tr><th>NBA binary classification dataset from Pycaret</th><td>73.93</td><td>69.95</td><td>4.71</td><td>3.15</td><td>4.72</td><td>6.87</td><td>12.88</td><td>4.19</td><td>154.63</td><td>335.97</td><td>2.19</td><td>5.99</td><td>3.59</td><td>3.08</td><td>11.47</td><td>10.94</td><td>4.18</td><td>4.34</td><td>9.45</td><td>5.87</td></tr> <tr><th>Online Shoppers Purchasing Intention </th><td>117.25</td><td>33.54</td><td>15.53</td><td>6.77</td><td>18.42</td><td>20.03</td><td>22.89</td><td>17.72</td><td>149.84</td><td>343.59</td><td>29.59</td><td>32.92</td><td>3.47</td><td>3.27</td><td>65.39</td><td>271.13</td><td>26.99</td><td>23.75</td><td>18.65</td><td>8.18</td></tr> <tr><th>Spambase Data 
Set</th><td>81.78</td><td>12.15</td><td>16.0</td><td>6.31</td><td>10.67</td><td>7.34</td><td>20.96</td><td>5.69</td><td>162.84</td><td>346.17</td><td>12.19</td><td>12.56</td><td>4.01</td><td>3.31</td><td>56.79</td><td>224.29</td><td>15.08</td><td>11.14</td><td>13.24</td><td>6.7</td></tr> <tr><th>Spiral Dataset</th><td>69.99</td><td>1.66</td><td>3.35</td><td>5.97</td><td>3.7</td><td>1.33</td><td>12.85</td><td>4.6</td><td>268.4</td><td>541.23</td><td>1.39</td><td>3.59</td><td>1.75</td><td>1.56</td><td>31.58</td><td>88.57</td><td>2.51</td><td>2.97</td><td>9.05</td><td>5.73</td></tr> <tr><th>Telco customer churn</th><td>113.11</td><td>62.06</td><td>7.64</td><td>4.06</td><td>14.04</td><td>14.53</td><td>20.72</td><td>5.82</td><td>151.79</td><td>335.25</td><td>15.61</td><td>25.49</td><td>3.87</td><td>3.31</td><td>45.66</td><td>161.35</td><td>12.92</td><td>9.13</td><td>14.25</td><td>6.62</td></tr> <tr><th>Titanic Passenger Survival dataset</th><td>71.44</td><td>41.16</td><td>4.0</td><td>3.05</td><td>4.77</td><td>3.89</td><td>12.92</td><td>4.59</td><td>164.03</td><td>351.83</td><td>2.04</td><td>6.43</td><td>3.5</td><td>3.46</td><td>35.2</td><td>140.94</td><td>4.23</td><td>4.17</td><td>9.85</td><td>6.16</td></tr> <tr><th>Average</th><td>92.92</td><td>41.18</td><td>10.47</td><td>4.86</td><td>37.39</td><td>7.66</td><td>19.72</td><td>9.59</td><td>170.04</td><td>363.51</td><td>14.81</td><td>16.92</td><td>3.32</td><td>3.07</td><td>52.95</td><td>176.73</td><td>10.24</td><td>8.05</td><td>12.51</td><td>6.64</td></tr> </table>
tum5RnE4A5W8
ready-tensor
cc-by-sa
Balancing the Scales: A Comprehensive Study on Tackling Class Imbalance in Binary Classification
![imbalanced-balance-scale-stretched.webp](imbalanced-balance-scale-stretched.webp)--DIVIDER--TL;DR This study evaluates three strategies for handling imbalanced datasets in binary classification—SMOTE, class weights, and decision threshold calibration—across 15 classifiers and 30 datasets. Results from 9,000 experiments show all methods generally outperform the baseline, with decision threshold calibration emerging as the most consistent performer. However, significant variability across datasets emphasizes the importance of testing multiple approaches for specific problems. --- --DIVIDER--# Abstract Class imbalance in binary classification tasks remains a significant challenge in machine learning, often resulting in poor performance on minority classes. This study comprehensively evaluates three widely-used strategies for handling class imbalance: Synthetic Minority Over-sampling Technique (SMOTE), Class Weights tuning, and Decision Threshold Calibration. We compare these methods against a baseline scenario across 15 diverse machine learning models and 30 datasets from various domains, conducting a total of 9,000 experiments. Performance was primarily assessed using the F1-score, with additional 9 metrics including F2-score, precision, recall, Brier-score, PR-AUC, and AUC. Our results indicate that all three strategies generally outperform the baseline, with Decision Threshold Calibration emerging as the most consistently effective technique. However, we observed substantial variability in the best-performing method across datasets, highlighting the importance of testing multiple approaches for specific problems. This study provides valuable insights for practitioners dealing with imbalanced datasets and emphasizes the need for dataset-specific analysis in evaluating class imbalance handling techniques.--DIVIDER--# Introduction Binary classification tasks frequently encounter imbalanced datasets, where one class significantly outnumbers the other. This imbalance can severely impact model performance, often resulting in classifiers that excel at identifying the majority class but perform poorly on the critical minority class. In fields such as fraud detection, disease diagnosis, and rare event prediction, this bias can have serious consequences. One of the most influential techniques developed to address this challenge is the **Synthetic Minority Over-sampling Technique (SMOTE)**, a method proposed by Chawla et al. (2002) that generates synthetic examples of the minority class. Since its introduction, the SMOTE paper has become one of the most cited papers in the field of imbalanced learning, with over 30,000 citations. SMOTE's popularity has spurred the creation of many other oversampling techniques and numerous SMOTE variants. For example, Kovács (2019) documents 85 SMOTE-variants implemented in Python, including: - **Borderline-SMOTE** (Han et al., 2005) - **Safe-Level-SMOTE** (Bunkhumpornpat et al., 2009) - **SMOTE + Tomek** and **SMOTE + ENN** (Batista et al., 2004) Despite its widespread use, recent studies have raised some criticisms of SMOTE. For instance, Blagus and Lusa (2013) indicate limitations in handling high-dimensional data, while Elor and Averbuch-Elor (2022) and Hulse et al. (2007) suggest the presence of better alternatives for handling class imbalance. This highlights that while SMOTE is a powerful tool, it is not without limitations. 
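For readers unfamiliar with how SMOTE is typically applied in practice, here is a brief, illustrative sketch using the imbalanced-learn library. It is not the pipeline used in this study; the synthetic dataset and parameter values are arbitrary choices for demonstration.

```python
from collections import Counter

from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

# Build a synthetic, imbalanced binary dataset (~10% minority class)
X, y = make_classification(
    n_samples=2000,
    n_features=20,
    weights=[0.9, 0.1],
    random_state=42,
)
print("Before SMOTE:", Counter(y))

# In a real workflow, oversample only the training split, never the test split.
# k_neighbors controls how synthetic minority points are interpolated.
X_resampled, y_resampled = SMOTE(k_neighbors=5, random_state=42).fit_resample(X, y)
print("After SMOTE:", Counter(y_resampled))
```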
In this study, we aim to provide a more balanced view of techniques for handling class imbalance by evaluating not only SMOTE but also other widely-used strategies, such as Class Weights and Decision Threshold Calibration. These three treatment scenarios target class imbalance at different stages of the machine learning pipeline:

1. **SMOTE**: Generating synthetic examples of the minority class during data preprocessing.
2. **Class Weights**: Adjusting the importance of classes during model training.
3. **Decision Threshold Calibration**: Adjusting the classification threshold post-training.

We compare these strategies with the **Baseline** approach (standard model training without addressing imbalance) to assess their effectiveness in improving model performance on imbalanced datasets. Our goal is to provide insights into which treatment methods offer the most significant improvements in performance metrics such as F1-score, F2-score, accuracy, precision, recall, Matthews Correlation Coefficient (MCC), Brier score, log-loss, PR-AUC, and AUC. We also aim to evaluate these techniques across a wide range of datasets and models to provide a more generalizable understanding of their effectiveness.

To ensure a comprehensive evaluation, this study encompasses:

- **30 datasets** from various domains, with sample sizes ranging from ~500 to 20,000 and rare class percentages between 1% and 20%.
- **15 classifier models**, including tree-based methods, boosting algorithms, neural networks, and traditional classifiers.
- Evaluation using 5-fold cross-validation.

In total, we conduct 9,000 experiments involving the 4 scenarios, 15 models, 30 datasets, and validation folds. This extensive approach allows us to compare these methods and their impact on model performance across a wide range of scenarios and algorithmic approaches. It provides a robust foundation for understanding the effectiveness of different imbalance handling strategies in binary classification tasks.--DIVIDER--# Methodology

## Datasets

We selected 30 datasets based on the following criteria:

- Binary classification problems
- Imbalanced class distribution (minority class < 20%)
- Sample size ≤ 20,000
- Feature count ≤ 100
- Real-world data from diverse domains
- Publicly available

The characteristics of the selected datasets are summarized in the chart below:--DIVIDER-- ![datasets-summary.png](datasets-summary.png)--DIVIDER--

The dataset selection criteria were carefully chosen to ensure a comprehensive and practical study:

- The 20% minority class threshold for class imbalance, while somewhat arbitrary, represents a reasonable cut-off point that is indicative of significant imbalance.
- The limitations on sample size (≤ 20,000) and feature count (≤ 100) were set to accommodate a wide range of real-world datasets while ensuring manageable computational resources for an experiment of our scale. This balance allows us to include diverse, practically relevant datasets without compromising the breadth of our study.
- The focus on diverse domains ensures that our models are tested across a wide range of industries and data characteristics, enhancing the generalizability of our findings.
--DIVIDER-- :::info{title="Info"} <h2> Dataset Repository </h2> You can find the study datasets and information about their sources and specific characteristics in the following repository: [Imbalanced Classification Study Datasets](https://github.com/readytensor/rt-binary-imbalance-datasets) This repository is also linked in the **Datasets** section of this publication. :::--DIVIDER-- ## Models Our study employed a diverse set of 15 classifier models, encompassing a wide spectrum of algorithmic approaches and complexities. This selection ranges from simple baselines to advanced ensemble methods and neural networks, including tree-based models and various boosting algorithms. The diversity in our model selection allows us to assess how different imbalanced data handling techniques perform across various model types and complexities. The following chart lists the models used in our experiments:--DIVIDER-- ![classifiers.png](classifiers.png)--DIVIDER--A key consideration in our model selection process was ensuring that all four scenarios (Baseline, SMOTE, Class Weights, and Decision Threshold Calibration) could be applied consistently to each model. This criterion influenced our choices, leading to the exclusion of certain algorithms such as k-Nearest Neighbors (KNN) and Naive Bayes Classifiers, which do not inherently support the application of class weights. This careful selection process allowed us to maintain consistency across all scenarios while still representing a broad spectrum of machine learning approaches. --DIVIDER-- <h2> Implementation Details </h2> Each model is implemented in a separate repository to accommodate differing dependencies, but all are designed to work with any dataset in a generalized manner. These repositories include: - Training and testing code - Docker containerization for environment-independent usage - Hyperparameter tuning code, where applicable To ensure a fair comparison, we used the same preprocessing pipeline for all 15 models and scenarios. This pipeline includes steps such as one-hot encoding, standard scaling, and missing data imputation. The only difference in preprocessing occurs in the SMOTE scenario, where synthetic minority class examples are generated. Otherwise, the preprocessing steps are identical across all models and scenarios, ensuring that the only difference is the algorithm and the specific imbalance handling technique applied. Additionally, each model's hyperparameters were kept constant across the Baseline, SMOTE, Class Weights, and Decision Threshold scenarios to ensure fair comparisons. The imbalanced data handling scenarios are implemented in a branch named `imbalance`. A configuration file, `model_config.json`, allows users to specify which scenario to run: `baseline`, `smote`, `class_weights`, or `decision_threshold`.--DIVIDER--:::info{title="Info"} <h2> Model Repositories </h2> All model implementations are available in our public repositories, linked in the **Models** section of this publication. :::--DIVIDER--## Evaluation Metrics To comprehensively evaluate the performance of the models across different imbalanced data handling techniques, we tracked the following 10 metrics:--DIVIDER-- ![evaluation-metrics.png](evaluation-metrics.png)--DIVIDER-- Our primary focus is on the **F1-score**, a label metric that uses predicted classes rather than underlying probabilities. The F1-score provides a balanced measure of precision and recall, making it particularly useful for assessing performance on imbalanced datasets. 
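As a quick, illustrative aside (not part of the study's evaluation code), the F1 and F2 scores can be computed from predicted labels with scikit-learn; the F2-score is simply the F-beta score with beta=2, which weights recall more heavily than precision. The labels below are hypothetical.

```python
from sklearn.metrics import f1_score, fbeta_score

# Hypothetical ground-truth labels and predictions for a rare positive class
y_true = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1]
y_pred = [0, 0, 0, 0, 1, 1, 0, 1, 1, 1]

print("F1:", f1_score(y_true, y_pred))             # 0.75  (precision 0.6, recall 1.0)
print("F2:", fbeta_score(y_true, y_pred, beta=2))  # ~0.88 (recall weighted more heavily)
```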
While real-world applications often employ domain-specific cost matrices to create custom metrics, our study spans 30 diverse datasets. The F1-score allows us to evaluate all four scenarios, including decision threshold tuning, consistently across this varied set of problems. Although our analysis emphasizes the F1-score, we report results for all 10 metrics. Readers can find comprehensive information on model performance across all metrics and scenarios in the detailed results repository linked in the Datasets section of this publication.--DIVIDER-- ## Experimental Procedure Our experimental procedure was designed to ensure a robust and comprehensive evaluation of the four imbalance handling scenarios across diverse datasets and models. The process consisted of the following steps: <h2> Dataset Splitting </h2> We employed a form of nested cross-validation for each dataset to ensure robust model evaluation and proper hyperparameter tuning: 1. Outer Loop: 5-fold cross-validation - Each dataset was split into five folds - Results were reported for all five test splits, providing mean and standard deviation values across the folds 2. Inner Validation: 90/10 train-validation split - For scenarios requiring hyperparameter tuning (SMOTE, Class Weights, and Decision Threshold Calibration), the training split from the outer loop was further divided into a 90% train and 10% validation split - The validation split was used exclusively for tuning hyperparameters This nested structure ensures that the test set from the outer loop remains completely unseen during both training and hyperparameter tuning, providing an unbiased estimate of model performance. The outer test set was reserved for final evaluation, while the inner validation set was used solely for hyperparameter optimization in the relevant scenarios. <h2> Scenario Descriptions </h2> We evaluated four distinct scenarios for handling class imbalance: 1. **Baseline**: This scenario involves standard model training without any specific treatment for class imbalance. It serves as a control for comparing the effectiveness of the other strategies. 2. **SMOTE (Synthetic Minority Over-sampling Technique)**: In this scenario, we apply SMOTE to the training data to generate synthetic examples of the minority class. 3. **Class Weights**: This approach involves adjusting the importance of classes during model training, focusing on the minority class weight while keeping the majority class weight at 1. 4. **Decision Threshold Calibration**: In this scenario, we adjust the classification threshold post-training to optimize the model's performance on imbalanced data. Each scenario implements only one treatment method in isolation. We do not combine treatments across scenarios. Specifically: - For scenarios 1, 2, and 3, we apply the default decision threshold of 0.5. - For scenarios 1, 2, and 4, the class weights are set to 1.0 for both positive and negative classes. - SMOTE is applied only in scenario 2, class weight adjustment only in scenario 3, and decision threshold calibration only in scenario 4. This approach allows us to assess the individual impact of each treatment method on handling class imbalance.--DIVIDER-- <h2> Hyperparameter Tuning </h2> For scenarios requiring hyperparameter tuning (SMOTE, Class Weights, and Decision Threshold), we employed a simple grid search strategy to maximize the F1-score measured on the single validation split (10% of the training data) for each fold. 
The grid search details for the three treatment scenarios were as follows: <h3> SMOTE </h3> We tuned the number of neighbors hyperparameter, performing a simple grid search over `k` values of 1, 3, 5, 7, and 9. <br/><br/> <h3> Class Weights </h3> In this scenario, we adjusted the class weights to handle class imbalance during model training. The tuning process involved adjusting the weight for the minority class relative to the majority class. If both classes were given equal weights (e.g., 1 and 1), no class imbalance handling was applied—this corresponds to the baseline scenario. For the balanced scenario, we set the minority class weight proportional to the class imbalance (e.g., if the majority/minority class ratio was 5:1, the weight for the minority class would be 5). We conducted grid search on the following factors: 0 (baseline case), 0.25, 0.5, 0.75, 1.0 (balanced), and 1.25 (over-correction). The optimal weight was selected based on the F1-score on the validation split. <br/><br/> <h3> Decision Threshold Calibration </h3> We tuned the threshold parameter from 0.05 to 0.5 with a step size of 0.05, allowing for a wide range of potential decision boundaries. --DIVIDER--:::info{title="Info"} There are no scenario-specific hyperparameters to tune for the Baseline scenario. As a result, no train/validation split was needed, and the entire training set was used for model training. :::--DIVIDER--<h2> Overall Scope of Experiments </h2> Overall, this study contains 9,000 experiments driven by the following factors: - 30 datasets - 15 models - 4 scenarios - 5-fold cross-validation For each experiment, we recorded the 10 performance metrics across the five test splits. In the following sections, we present the results of these extensive experiments.--DIVIDER--# Results This section presents a comprehensive analysis of our experiments comparing four strategies for handling class imbalance in binary classification tasks. We begin with an overall comparison of the four scenarios (Baseline, SMOTE, Class Weights, and Decision Threshold Calibration) across all ten evaluation metrics. Following this, we focus on the F1-score metric to examine performance across the 15 classifier models and 30 datasets used in our study. Our analysis is structured as follows: 1. Overall performance comparison by scenario and metric 2. Model-specific performance on F1-score 3. Dataset-specific performance on F1-score 4. Statistical analysis, including repeated measures tests and post-hoc pairwise comparisons For the overall, model-specific, and dataset-specific analyses, we report mean performance and standard deviations across the five test splits from our cross-validation procedure. The final section presents the results of our statistical tests, offering a rigorous comparison of the four scenarios' effectiveness in handling class imbalance. --DIVIDER--## Overall Comparison Figure 1 presents the mean performance and standard deviation for all 10 evaluation metrics across the four scenarios: Baseline, SMOTE, Class Weights, and Decision Threshold Calibration.--DIVIDER-- ![overall_results.svg](overall_results.svg) _Figure 1: Mean performance and standard deviation of evaluation metrics across all scenarios. Best values per metric are highlighted in blue._--DIVIDER--The results represent aggregated performance across all 15 models and 30 datasets, providing a comprehensive overview of the effectiveness of each scenario in handling class imbalance. 
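Before turning to the results, here is a minimal, hypothetical sketch of the kind of decision-threshold grid search described above: a classifier's predicted probabilities are evaluated on a held-out validation split at thresholds from 0.05 to 0.5, and the threshold that maximizes the F1-score is kept. This is only an illustration of the idea, with an arbitrary classifier and synthetic data, not the study's actual implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Hypothetical imbalanced dataset and a 90/10 train-validation split
X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.1, stratify=y, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
val_probs = model.predict_proba(X_val)[:, 1]  # probability of the minority class

# Grid search over candidate thresholds, keeping the one with the best validation F1
thresholds = np.arange(0.05, 0.55, 0.05)
scores = [f1_score(y_val, (val_probs >= t).astype(int)) for t in thresholds]
best_threshold = thresholds[int(np.argmax(scores))]
print(f"Best threshold: {best_threshold:.2f}, validation F1: {max(scores):.3f}")
```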
--DIVIDER--<h2> F1-Score Performance </h2> The results show that all three class imbalance handling techniques outperform the Baseline scenario in terms of F1-score: 1. Decision Threshold Calibration achieved the highest mean F1-score (0.617 ± 0.005) 2. SMOTE followed closely (0.605 ± 0.006) 3. Class Weights showed improvement over Baseline (0.594 ± 0.006) 4. Baseline had the lowest F1-score (0.556 ± 0.006) This suggests that addressing class imbalance, regardless of the method, generally improves model performance as measured by the F1-score. <h2> Other Metrics </h2> While our analysis primarily focuses on the F1-score, it's worth noting observations from the other metrics: - **F2-score and Recall**: Decision Threshold Calibration and SMOTE showed the highest performance, indicating these methods are particularly effective at improving the model's ability to identify the minority class. - **Precision**: The Baseline scenario achieved the highest precision, suggesting a more conservative approach in predicting the minority class. - **MCC (Matthews Correlation Coefficient)**: SMOTE and Decision Threshold Calibration tied for the best performance, indicating a good balance between true and false positives and negatives. - **PR-AUC and AUC**: These metrics showed relatively small differences across scenarios. Notably, SMOTE and Class Weights did not deteriorate performance on these metrics compared to the Baseline. As expected, Decision Threshold Calibration, being a post-model adjustment, does not materially impact these probability-based metrics (as well as Brier-Score). - **Accuracy**: The Baseline scenario achieved the highest accuracy, which is common in imbalanced datasets where high accuracy can be achieved despite poor minority class detection. - **Log-Loss**: The Baseline scenario performed best, suggesting it produces the most well-calibrated probabilities. SMOTE showed the highest log-loss, indicating potential issues with probability calibration. - **Brier-Score**: As expected, the Baseline and Decision Threshold scenarios show identical performance, as Decision Threshold Calibration is a post-prediction adjustment and doesn't affect the underlying probabilities used in the Brier Score calculation. Notably, SMOTE performed significantly worse on this metric, indicating it produces poorly calibrated probabilities compared to the other scenarios. Based on these observations, Decision Threshold Calibration demonstrates strong performance across several key metrics, particularly those focused on minority class prediction (F1-score, F2-score, and Recall). It achieves this without compromising the calibration of probabilities of the baseline model, as evidenced by the identical Brier Score. In contrast, while SMOTE improves minority class detection, it leads to the least well-calibrated probabilities, as shown by its poor Brier Score. This suggests that Decision Threshold Calibration could be particularly effective in scenarios where accurate identification of the minority class is crucial, while still maintaining the probability calibration of the original model. For the rest of this article, we will focus on the F1-score due to its balanced representation of precision and recall, which is particularly important in imbalanced classification tasks. --DIVIDER-- ## Results by Model Figure 2 presents the mean F1-scores and standard deviations for each of the 15 models across the four scenarios. Each model's scores are averaged across the 30 datasets. 
--DIVIDER-- ![by_model_f1_results.svg](by_model_f1_results.svg) _Figure 2: Mean F1-scores and standard deviations for each model across the four scenarios. Highest values per model are highlighted in blue._--DIVIDER-- Key observations from these results include: 1. **Scenario Comparison**: For each model, we compared the performance of the four scenarios (Baseline, SMOTE, Class Weights, and Decision Threshold Calibration). This within-model comparison is more relevant than comparing different models to each other, given the diverse nature of the classifier techniques. </br> 2. **Decision Threshold Performance**: The Decision Threshold Calibration scenario achieved the highest mean F1-score in 10 out of 15 models. Notably, even when it wasn't the top performer, it consistently remained very close to the best scenario for that model. </br> 3. **Other Scenarios**: Within individual models, Class Weights performed best in 3 cases, while SMOTE and Baseline each led in 1 case. </br> 4. **Consistent Improvement**: All three imbalance handling techniques generally showed improvement over the Baseline scenario across most models, with 1 exception. </br> These results indicate Decision Threshold Calibration was most frequently the top performer across the 15 models. This suggests that post-model adjustments to the decision threshold is a robust strategy for improving model performance across different classifier techniques. However, the strong performance of other techniques in some cases underscores the importance of testing multiple approaches when dealing with imbalanced datasets in practice. --DIVIDER-- ## Results by Dataset Figure 3 presents the mean F1-scores and standard deviations for each of the 30 datasets across the four scenarios.--DIVIDER-- ![by_dataset_f1_results.svg](by_dataset_f1_results.svg) _Figure 3: Mean F1-scores and standard deviations for each dataset across the four scenarios. Highest values per dataset are highlighted in blue._ --DIVIDER--:::info{title="Info"} These results are aggregated across the 15 models for each dataset. While this provides insights into overall trends, in practice, one would typically seek to identify the best model-scenario combination for a given dataset under consideration. :::--DIVIDER--Key observations from these results include: 1. **Variability**: There is substantial variability in which scenario performs best across different datasets, highlighting that there is no one-size-fits-all solution for handling class imbalance. 2. **Scenario Performance**: - Decision Threshold Calibration was best for 12 out of 30 datasets (40%) - SMOTE was best for 9 datasets (30%) - Class Weights was best for 7 datasets (23.3%) - Baseline was best for 3 datasets (10%) - There was one tie between SMOTE and Class Weights 3. **Improvement Magnitude**: The degree of improvement over the Baseline varies greatly across datasets, from no improvement to substantial gains (e.g., satellite vs abalone_binarized). 4. **Benefit of Imbalance Handling**: While no single technique consistently outperformed others across all datasets, the three imbalance handling strategies generally showed improvement over the Baseline for most datasets. These results underscore the importance of testing multiple imbalance handling techniques for each specific dataset and task, rather than relying on a single approach. The variability observed suggests that the effectiveness of each method may depend on the unique characteristics of each dataset. 
--DIVIDER--:::info{title="Info"} One notable observation is the contrast between these dataset-level results and the earlier model-level results. While the model-level analysis suggested Decision Threshold Calibration as a generally robust approach, the dataset-level results show much more variability. This apparent discrepancy highlights the complexity of handling class imbalance and suggests that the effectiveness of different techniques may be more dependent on dataset characteristics than on model type. :::--DIVIDER-- ## Statistical Analysis To rigorously compare the performance of the four scenarios, we conducted statistical tests on the F1-scores aggregated by dataset (averaging across the 15 models for each dataset). <h2> Repeated Measures ANOVA </h2> We performed a repeated measures ANOVA to test for significant differences among the four scenarios. For this test, we have 30 datasets, each with four scenario F1-scores, resulting in 120 data points. The null hypothesis is that there are no significant differences among the mean F1-scores of the four scenarios. We use Repeated Measures ANOVA to account because we have multiple measurements (scenarios) for each dataset. - **Result**: The test yielded a p-value of 2.01e-07, which is well below our alpha level of 0.05. - **Interpretation**: This result indicates statistically significant differences among the mean F1-scores of the four scenarios. <h2> Post-hoc Pairwise Comparisons </h2> Following the significant ANOVA result, we conducted post-hoc pairwise comparisons using a Bonferroni correction to adjust for multiple comparisons. With 6 comparisons, our adjusted alpha level is 0.05/6 = 0.0083. The p-values for the pairwise comparisons are presented in Table 1. **Table 1: P-values for pairwise comparisons (Bonferroni-corrected)** | Scenario | Class Weights | Decision Threshold | SMOTE | | ------------------ | ------------- | ------------------ | -------- | | Baseline | 7.77e-05 | 2.26e-04 | 1.70e-03 | | Class Weights | - | 2.06e-03 | 1.29e-01 | | Decision Threshold | - | - | 2.83e-02 | Key findings from the pairwise comparisons: 1. The Baseline scenario is significantly different from all other scenarios (p < 0.0083 for all comparisons). 2. Class Weights is significantly different from Baseline and Decision Threshold, but not from SMOTE. 3. There is no significant difference between SMOTE and Decision Threshold, or between SMOTE and Class Weights at the adjusted alpha level. These results suggest that while all three imbalance handling techniques (SMOTE, Class Weights, and Decision Threshold) significantly improve upon the Baseline, the differences among these techniques are less pronounced. The Decision Threshold approach shows a significant improvement over Baseline and Class Weights, but not over SMOTE, indicating that both Decision Threshold and SMOTE may be equally effective strategies for handling class imbalance in many cases. --DIVIDER-- # Discussion of Results <h2> Key Findings and Implications </h2> Our comprehensive study on handling class imbalance in binary classification tasks yielded several important insights: 1. **Addressing Class Imbalance**: Our results strongly suggest that handling class imbalance is crucial for improving model performance. Across most datasets and models, at least one of the imbalance handling techniques outperformed the baseline scenario, often by a significant margin. <br/><br/> 2. 
**Effectiveness of SMOTE**: SMOTE demonstrated considerable effectiveness in minority class detection, showing significant improvements over the baseline in many cases. It was the best-performing method for 30% of the datasets, indicating its value as a class imbalance handling technique. However, it's important to note that while SMOTE improved minority class detection, it also showed the worst performance in terms of probability calibration, as evidenced by its high Log-Loss and Brier Score. This suggests that while SMOTE can be effective for improving classification performance, it may lead to less reliable probability estimates. Therefore, its use should be carefully considered in applications where well-calibrated probabilities are crucial. <br/><br/> 3. **Optimal Method**: Decision Threshold Calibration emerged as the most consistently effective technique, performing best for 40% of datasets and showing robust performance across different model types. It's also worth noting that among the three methods studied, Decision Threshold Calibration is the least computationally expensive. Given its robust performance and efficiency, it could be considered a strong default choice for practitioners dealing with imbalanced datasets. <br/><br/> 4. **Variability Across Datasets**: Despite the overall strong performance of Decision Threshold Calibration, we observed substantial variability in the best-performing method across datasets. This underscores the importance of testing multiple approaches for each specific problem. <br/><br/> 5. **Importance of Dataset-Level Analysis**: Unlike many comparative studies on class imbalance that report results at the model level aggregated across datasets, our study emphasizes the importance of dataset-level analysis. We found that the best method can vary significantly depending on the dataset characteristics. This observation highlights the necessity of analyzing and reporting findings at the dataset level to provide a more nuanced and practical understanding of imbalance handling techniques. --DIVIDER-- <h2> Study Limitations and Future Work </h2> While our study provides valuable insights, it's important to acknowledge its limitations: 1. **Fixed Hyperparameters**: We used previously determined model hyperparameters. Future work could explore the impact of optimizing these hyperparameters specifically for imbalanced datasets. For instance, adjusting the maximum depth in tree models might allow for better modeling of rare classes. <br/><br/> 2. **Statistical Analysis**: Our analysis relied on repeated measures ANOVA and post-hoc tests. A more sophisticated approach, such as a mixed-effects model accounting for both dataset and model variability simultaneously, could provide additional insights and is an area for future research. <br/><br/> 3. **Dataset Characteristics**: While we observed variability in performance across datasets, we didn't deeply analyze how specific dataset characteristics (e.g., sample size, number of features, degree of imbalance) might influence the effectiveness of different methods. Future work could focus on identifying patterns in dataset characteristics that predict which imbalance handling technique is likely to perform best. <br/><br/> 4. **Limited Scope of Techniques**: Our study focused on three common techniques for handling imbalance. Future research could expand this to include other methods or combinations of methods. <br/><br/> 5. 
**Performance Metric Focus**: While we reported multiple metrics, our analysis primarily focused on F1-score. Different applications might prioritize other metrics, and the relative performance of these techniques could vary depending on the chosen metric. These limitations provide opportunities for future research to further refine our understanding of handling class imbalance in binary classification tasks. Despite these limitations, our study offers valuable guidance for practitioners and researchers dealing with imbalanced datasets, emphasizing the importance of addressing class imbalance and providing insights into the relative strengths of different approaches. --DIVIDER--# Conclusion Our study provides a comprehensive evaluation of three widely used strategies—SMOTE, Class Weights, and Decision Threshold Calibration—for handling imbalanced datasets in binary classification tasks. Compared to a baseline scenario where no intervention was applied, all three methods demonstrated substantial improvements in key metrics related to minority class detection, particularly the F1-score, across a wide range of datasets and machine learning models. The results show that addressing class imbalance is crucial for improving model performance. Decision Threshold Calibration emerged as the most consistent and effective technique, offering significant performance gains across various datasets and models. SMOTE also performed well, and Class Weights tuning proved to be a reasonable method for handling class imbalance, showing moderate improvements over the baseline. However, the variability in performance across datasets highlights that no single method is universally superior. Therefore, practitioners should consider testing multiple approaches and tuning them based on their specific dataset characteristics. While our study offers valuable insights, certain areas could be explored in future research. We fixed the hyperparameters across scenarios to ensure fair comparisons, holding all factors constant except for the treatment. Future research could investigate optimizing hyperparameters specifically for imbalanced datasets. Additionally, further work could explore how specific dataset characteristics influence the effectiveness of different techniques. Expanding the scope to include other imbalance handling methods or combinations of methods would also provide deeper insights. While our primary analysis focused on the F1-score, results for other metrics are available, allowing for further exploration and custom analyses based on different performance criteria. In conclusion, our findings emphasize the importance of addressing class imbalance and offer guidance on choosing appropriate techniques based on dataset and model characteristics. Decision Threshold Calibration, with its strong and consistent performance, can serve as a valuable starting point for practitioners dealing with imbalanced datasets, but flexibility and experimentation remain key to achieving the best results.--DIVIDER--# Additional Resources To support the reproducibility of our study and provide further value to researchers and practitioners, we have made several resources publicly available: 1. **Model Repositories**: Implementations of all 15 models used in this study are available in separate repositories. These can be accessed through links provided in the "Models" section of this publication.<br><br> 2. 
**Dataset Repository**: The 30 datasets used in our study are available in a GitHub repository titled "30 Imbalanced Classification Study Datasets". This repository includes detailed information about each dataset's characteristics and sources. - GitHub link: [https://github.com/readytensor/rt-datasets-binary-class-imbalance](https://github.com/readytensor/rt-datasets-binary-class-imbalance) 3. **Results Repository**: A comprehensive collection of our study results is available in a GitHub repository titled "Imbalanced Classification Results Analysis". This includes detailed performance metrics and analysis scripts. - GitHub link: [https://github.com/readytensor/rt-binary-class-imbalance-results](https://github.com/readytensor/rt-binary-class-imbalance-results) 4. **Hyperparameters**: The hyperparameters used in the experiment are listed in the **`hyperparmeters.csv`** file in the "Resources" section. All project work is open-source, encouraging further exploration and extension of our research. We welcome inquiries and feedback from the community. For any questions or discussions related to this study, please contact the authors at [email protected]. We encourage researchers and practitioners to utilize these resources to validate our findings, conduct further analyses, or extend this work in new directions.--DIVIDER--# References 1. Batista, G.E., Prati, R.C., & Monard, M.C. (2004). A study of the behavior of several methods for balancing machine learning training data. _ACM SIGKDD Explorations Newsletter_, 6(1), 20-29. 2. Blagus, R., & Lusa, L. (2013). SMOTE for high-dimensional class-imbalanced data. _BMC Bioinformatics_, 14(1), 106. 3. Bunkhumpornpat, C., Sinapiromsaran, K., & Lursinsap, C. (2009). Safe-level-SMOTE: Safe-level-synthetic minority over-sampling technique for handling the class imbalanced problem. In _Advances in Knowledge Discovery and Data Mining: 13th Pacific-Asia Conference, PAKDD 2009 Bangkok, Thailand, April 27-30, 2009 Proceedings_ (pp. 475-482). Springer Berlin Heidelberg. 4. Chawla, N.V., Bowyer, K.W., Hall, L.O., & Kegelmeyer, W.P. (2002). SMOTE: Synthetic minority over-sampling technique. _Journal of Artificial Intelligence Research_, 16, 321-357. 5. Elor, Y., & Averbuch-Elor, H. (2022). To SMOTE, or not to SMOTE? _arXiv preprint arXiv:2201.08528_. 6. Han, H., Wang, W.Y., & Mao, B.H. (2005, August). Borderline-SMOTE: A new over-sampling method in imbalanced data sets learning. In _International Conference on Intelligent Computing_ (pp. 878-887). Berlin, Heidelberg: Springer Berlin Heidelberg. 7. Kovács, G. (2019). SMOTE-variants: A Python implementation of 85 minority oversampling techniques. _Neurocomputing_, 366, 352-354. 8. Van Hulse, J., Khoshgoftaar, T.M., & Napolitano, A. (2007, June). Experimental perspectives on learning from imbalanced data. In _Proceedings of the 24th International Conference on Machine Learning_ (pp. 935-942).
v2pswk4Vf2Bq
ready-tensor
cc-by
Repeatability Is Not Reproducibility: Why AI Research Needs a Higher Bar
![repeatability-reproducibility2.webp](repeatability-reproducibility2.webp)--DIVIDER--# TL;DR Many AI/ML papers claim "reproducibility" by offering a GitHub repo that regenerates their results - but that’s just repeatability, not true validation. In our AI Magazine paper, we explain why reproducibility requires independent teams to verify correctness, and replicability requires testing whether findings hold under different conditions. To advance science, we need to move beyond automated re-runs and toward deeper, more rigorous validation.--DIVIDER--# Wait, What Even Is Reproducibility? Our team recently published a peer-reviewed paper in [AI Magazine](https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.70004) titled ["What is Reproducibility in Artificial Intelligence and Machine Learning Research?"](https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.70004). The paper tackles a growing problem in AI/ML research: even as the field calls for greater reproducibility, we don’t actually agree on what that word means. When we began working on the paper, we set out to offer suggestions for how researchers could improve reproducibility. But we quickly realized we needed to take a step back. The term “reproducibility” is used in so many different ways across papers, conferences, and fields that it has become confusing. This confusion isn’t just technical - it’s a barrier to real scientific progress. --DIVIDER--# The Rise of One-Click Pipelines You’ve probably seen this before: a paper claims its results are “fully reproducible” and links to a GitHub repo. The repo contains a script that automatically regenerates all the tables and charts from the paper, exactly as they appeared. The **sharing of implementation code is absolutely commendable**. It enables transparency and allows others to examine and test the work. But let’s recognize that this alone is not reproducibility. It’s **repeatability**. Repeatability means that the same experiment, run in the same way, produces the same results. Even if someone else clicks the button to run the script, they’re still just re-running the original pipeline. That doesn’t tell us whether the experiment was **implemented correctly** or whether the **conclusions are valid**. The real problem is the implication: “Look, the script runs and produces the same results as in the paper. So, everything checks out.” That kind of automation, while well-intentioned, can give a **false sense of validation** and unintentionally discourage critical scrutiny. To be clear: these fully-automated, end-to-end pipelines are helpful for authors in regenerating their own results. But when shared as proof of validation, they fall short.--DIVIDER--# Repeatable? Sure. Reliable? Not So Fast. True **reproducibility** means that an independent team, one not involved in the original study, engages with the original design to validate whether the findings truly hold. This might involve re-implementing the study from scratch, or using the original code, but in both cases, the goal is the same: to carefully examine the correctness of the implementation and confirm that the results are not the product of hidden flaws. And beyond that, we also need to test whether findings hold under slightly different setups. That’s called **replicability**, and it comes in two forms: - **Direct replicability**: The experiment is implemented differently, but the design remains the same - for example, using a different dataset or algorithmic variant. 
- **Conceptual replicability**: The experiment design itself changes, but it still tests the same core hypothesis. Each of these adds a layer of validation. Repeatability checks if results can be regenerated. Reproducibility checks if the implementation was correct. Replicability checks if the findings generalize.--DIVIDER--# What Real Reproducibility Looks Like We believe reproducibility isn't about convenience - it's about verification. To move the field forward, we need tools and practices that support real investigation, not just re-execution. That’s why we’re not just asking researchers to make their code public, we’re asking them to make it **useful for real investigation**. That means: - Sharing not just code, but **modular, well-documented code** that others can understand and build on. - Allowing others to **swap datasets**, adjust hyperparameters, change analysis steps, or test the impact of individual components. - Supporting **open-ended exploration**, not just automated re-execution. Reproducibility shouldn’t be a checkbox - it should be a discipline. One that’s built into how we conduct and share our work: with modular code, transparent design, and clear documentation that allows others to test, validate, and build upon it.--DIVIDER--# Read Our Full Paper Desai, Abhyuday, Mohamed Abdelhamid, and Nakul R. Padalkar. ["What is reproducibility in artificial intelligence and machine learning research?."](https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.70004) AI Magazine 46, no. 2 (2025): e70004.
wm6Jm52Y5pBu
ready-tensor
cc-by-sa
Beyond tracemalloc: A Comprehensive Resource Tracker for Python
![ram.png](ram.png)--DIVIDER--# Introduction In the world of Python programming, especially in data science and machine learning, efficient memory management is crucial. As projects grow in complexity and scale, understanding and optimizing memory usage becomes increasingly important. However, tracking memory consumption in Python can be surprisingly tricky, particularly when working with libraries like NumPy and PyTorch that manage their own memory allocations. The built-in `tracemalloc` module in Python, while useful for many scenarios, falls short when dealing with these specialized libraries. This limitation can lead to significant underestimation of memory usage, potentially causing unexpected out-of-memory errors or suboptimal resource allocation. In this publication, we'll explore the challenges of accurate memory tracking in Python, demonstrate why common solutions like `tracemalloc` are insufficient for complex scenarios, and introduce a comprehensive resource tracking solution. This custom implementation not only addresses the shortcomings of standard memory profilers but also provides a more holistic view of resource usage, including CPU and GPU memory, as well as execution time. Whether you're optimizing machine learning models, processing large datasets, or simply trying to understand the resource footprint of your Python applications, this resource tracker offers valuable insights that can help you write more efficient and reliable code.--DIVIDER--# Problem with tracemalloc The `tracemalloc` module, introduced in Python 3.4, is often the go-to solution for tracking memory allocation in Python programs. However, it has significant limitations when dealing with libraries that manage their own memory, such as NumPy and PyTorch. Let's examine this issue with a simple experiment: ```python import torch import tracemalloc import numpy as np def get_memory_usage(obj): samples = int(1e8) tracemalloc.start() if obj == "np": x = np.zeros((samples, 1)).astype("float64") elif obj == "torch": x = torch.ones(samples, 1).to(torch.float64) elif obj == "list": x = [0.0] * samples _, peak_usage = tracemalloc.get_traced_memory() tracemalloc.stop() return round(peak_usage / (1024**2), 3) print("Numpy memory usage", get_memory_usage("np")) print("PyTorch memory usage", get_memory_usage("torch")) print("Python list usage", get_memory_usage("list")) ``` This code creates three different objects of roughly similar size: a NumPy array, a PyTorch tensor, and a Python list. Each object contains 100 million elements of type float64. We then use `tracemalloc` to measure the peak memory usage for each object creation. The output of this code is surprising: ``` Numpy memory usage 1525.879 PyTorch memory usage 0.019 Python list memory usage 762.939 ``` These results reveal a glaring inconsistency: 1. The NumPy array shows about 1525 MB of memory usage. 2. The PyTorch tensor shows nearly zero memory usage. 3. The Python list shows about 763 MB of memory usage. In reality, each of these objects should occupy approximately the same amount of memory - around 763 MB $$ MB = 10^8 * \frac{64}{8*1024*1024} \ $$ The discrepancies arise because `tracemalloc` only tracks memory allocations made by Python itself, not those made by external libraries using their own memory management systems. This inconsistency poses several problems: 1. Underestimation of memory usage: For libraries like PyTorch, `tracemalloc` severely underreports memory consumption, potentially leading to unexpected out-of-memory errors. <br> <br> 2. 
Overestimation in some cases: For NumPy, `tracemalloc` seems to overestimate the memory usage, which could lead to overly conservative resource allocation. <br> <br> 3. Inconsistent profiling: The varying results make it difficult to accurately compare memory usage across different parts of a program that use different libraries. <br> These limitations highlight the need for a more comprehensive resource tracking solution, especially for projects that heavily rely on numerical computing and machine learning libraries. In the following sections, we'll introduce a custom resource tracker that addresses these issues and provides a more accurate and holistic view of memory usage in Python applications. --DIVIDER--# Introducing the ResourceTracker To address the limitations of `tracemalloc` and provide a more accurate and comprehensive view of resource usage, we've developed the `ResourceTracker`. This custom implementation offers a robust solution for monitoring memory usage and execution time across various Python libraries and hardware resources. ## Key Features of the ResourceTracker 1. **Multi-faceted Memory Tracking**: Unlike `tracemalloc`, our `ResourceTracker` uses multiple methods to capture memory usage: - Python memory via `tracemalloc` - System RAM usage through `psutil` - GPU memory for CUDA-enabled devices <br><br> 2. **Continuous Monitoring**: Instead of just capturing snapshots, the `ResourceTracker` continuously monitors memory usage, ensuring that peak usage is accurately recorded. <br><br> 3. **GPU Support**: For machine learning applications, the tracker includes GPU memory monitoring, a critical feature missing in standard Python profiling tools.<br><br> 4. **Execution Time Measurement**: Along with memory usage, the tracker also measures the execution time of the code block it's monitoring.<br><br> 5. **Easy Integration**: Implemented as a context manager, the `ResourceTracker` can be easily integrated into existing code with minimal changes. Let's take a closer look at the main components of the `ResourceTracker`: ```python import time import psutil import threading import tracemalloc import torch import os import numpy as np class ResourceTracker(object): """ This class serves as a context manager to track time and memory allocated by code executed inside it. 
""" def __init__(self, logger, monitoring_interval): self.logger = logger self.monitor = MemoryMonitor(logger=logger, interval=monitoring_interval) def __enter__(self): self.start_time = time.time() tracemalloc.start() self.monitor.start() return self def __exit__(self, exc_type, exc_value, traceback): self.end_time = time.time() self.monitor.stop() _, peak = tracemalloc.get_traced_memory() tracemalloc.stop() elapsed_time = self.end_time - self.start_time peak_python_memory_mb = peak / 1024**2 process_cpu_peak_memory_mb = self.monitor.get_peak_memory_usage() gpu_peak_memory_mb = self.get_peak_gpu_memory_usage() self.logger.info(f"Execution time: {elapsed_time:.2f} seconds") self.logger.info( f"Peak Python Allocated Memory: {peak_python_memory_mb:.2f} MB" ) self.logger.info( f"Peak CUDA GPU Memory Usage (Incremental): {gpu_peak_memory_mb:.2f} MB" ) self.logger.info( f"Peak System RAM Usage (Incremental): {process_cpu_peak_memory_mb:.2f} MB" ) def get_peak_gpu_memory_usage(self): """ Returns the peak memory usage by current cuda device (in MB) if available """ if not torch.cuda.is_available(): return 0 current_device = torch.cuda.current_device() peak_memory = torch.cuda.max_memory_allocated(current_device) return peak_memory / (1024 * 1024) ``` The `ResourceTracker` class serves as a context manager, starting the monitoring process when entered and collecting and logging the results when exited. It utilizes the `MemoryMonitor` class for continuous memory tracking: ```python class MemoryMonitor: initial_cpu_memory = None peak_cpu_memory = 0 # Class variable to store peak memory usage def __init__(self, interval=0.1, logger=print): self.interval = interval self.logger = logger or print self.running = False self.thread = threading.Thread(target=self.monitor_loop) def monitor_memory(self): process = psutil.Process(os.getpid()) total_memory = process.memory_info().rss # Check if the current memory usage is a new peak and update accordingly self.peak_cpu_memory = max(self.peak_cpu_memory, total_memory) if self.initial_cpu_memory is None: self.initial_cpu_memory = self.peak_cpu_memory def monitor_loop(self): """Runs the monitoring process in a loop.""" while self.running: self.monitor_memory() time.sleep(self.interval) def start(self): """Starts the memory monitoring.""" if not self.running: self.running = True self.thread.start() def stop(self): """Stops the periodic monitoring""" self.running = False self.thread.join() # Wait for the monitoring thread to finish def get_peak_memory_usage(self): # Convert both CPU and GPU memory usage from bytes to megabytes incremental_cpu_peak_memory = ( self.peak_cpu_memory - self.initial_cpu_memory ) / (1024**2) return incremental_cpu_peak_memory @classmethod def get_peak_memory(cls): """Returns the peak memory usage""" return cls.peak_cpu_memory ``` The `MemoryMonitor` runs in a separate thread, periodically checking and updating the peak memory usage. By combining these components, the `ResourceTracker` provides a comprehensive view of resource usage, addressing the inconsistencies we observed with `tracemalloc` and offering additional insights into GPU memory usage and execution time. In the next section, we'll demonstrate how to use the `ResourceTracker` in practice and compare its results with our earlier `tracemalloc` examples. :::info{title="Important Note"} The following method is specific to PyTorch. You may want to update it if you are working with other libraries like TensorFlow. 
```python def get_peak_gpu_memory_usage(self): """ Returns the peak memory usage by current cuda device (in MB) if available """ if not torch.cuda.is_available(): return 0 current_device = torch.cuda.current_device() peak_memory = torch.cuda.max_memory_allocated(current_device) return peak_memory / (1024 * 1024) ``` :::--DIVIDER--# Putting ResourceTracker to the Test Now that we've introduced the ResourceTracker, let's see how it performs in practice with a more demanding scenario. We'll use it to measure memory usage for large data structures, allowing us to demonstrate its accuracy and comprehensiveness in real-world situations. Here's our test function using the ResourceTracker: ```python import logging logging.basicConfig(level=logging.INFO) logger = logging.getLogger(__name__) def measure_with_resource_tracker(obj_type): with ResourceTracker(logger, monitoring_interval=0.001): samples = int(1e8) time.sleep(1) if obj_type == "list": x = [0.0] * samples elif obj_type == "np": x = np.zeros((samples, 1)).astype("float64") elif obj_type == "torch_cpu": x = torch.ones(samples, 1).to(torch.float64) elif obj_type == "torch_gpu": x = torch.ones(samples, 1).to(torch.float64).cuda() print("--" * 10) print('Python list') measure_with_resource_tracker("list") print('Numpy array') measure_with_resource_tracker("np") print('PyTorch CPU') measure_with_resource_tracker("torch_cpu") print('PyTorch GPU') measure_with_resource_tracker("torch_gpu") ``` This function creates a data structure with 100 million elements (1e8) of type float64, which should theoretically occupy about 763 MB of memory. Let's analyze the results for each case: ``` Python list INFO:__main__:Execution time: 1.99 seconds INFO:__main__:Peak Python Allocated Memory: 763.03 MB INFO:__main__:Peak CUDA GPU Memory Usage (Incremental): 0.00 MB INFO:__main__:Peak System RAM Usage (Incremental): 763.12 MB -------------------- Numpy array INFO:__main__:Execution time: 1.47 seconds INFO:__main__:Peak Python Allocated Memory: 1525.96 MB INFO:__main__:Peak CUDA GPU Memory Usage (Incremental): 0.00 MB INFO:__main__:Peak System RAM Usage (Incremental): 762.95 MB -------------------- PyTorch CPU INFO:__main__:Execution time: 1.86 seconds INFO:__main__:Peak Python Allocated Memory: 0.08 MB INFO:__main__:Peak CUDA GPU Memory Usage (Incremental): 0.00 MB INFO:__main__:Peak System RAM Usage (Incremental): 1145.65 MB -------------------- PyTorch GPU INFO:__main__:Execution time: 2.29 seconds INFO:__main__:Peak Python Allocated Memory: 0.09 MB INFO:__main__:Peak CUDA GPU Memory Usage (Incremental): 762.94 MB INFO:__main__:Peak System RAM Usage (Incremental): 1141.33 MB -------------------- ``` Let's analyze these results: 1. Python list: - Both the Python Allocated Memory and System RAM Usage are close to `763 MB` which is the expected number. 2. NumPy array: - The ResourceTracker shows a peak Python Allocated Memory of `1525.96 MB` and a System RAM Usage of `762.95 MB`. - This is higher than the expected `763 MB`, likely due to memory overhead in NumPy's allocation strategy and potential temporary allocations during array creation. <br><br> 3. PyTorch tensor (CPU): - Interestingly, the Peak Python Allocated Memory is only `0.08 MB`, while the System RAM Usage is `1145.65 MB`. - This demonstrates that PyTorch manages its own memory outside of Python's memory allocator, which ResourceTracker correctly captures in the System RAM Usage. <br><br> 4. 
PyTorch tensor (GPU): - The Peak Python Allocated Memory is very low at `0.09 MB`, similar to the CPU tensor case. - The System RAM Usage is `1141.33 MB`, which is close to the CPU tensor case. - Most importantly, we see a Peak CUDA GPU Memory Usage of `762.94 MB`. - This clearly demonstrates that PyTorch is allocating the tensor on the GPU, using CUDA memory. - The GPU memory usage (762.94 MB) is very close to the expected `763 MB` for our data. - This shows that ResourceTracker successfully captures GPU memory allocation, which is crucial for machine learning workloads using GPUs. - The similar System RAM usage to the CPU case might indicate some CPU-side overhead or memory mirroring that PyTorch performs even for GPU tensors. This last observation highlights the ResourceTracker's ability to monitor both CPU and GPU memory usage, providing a complete picture of resource utilization in deep learning scenarios. It accurately captures the shift of memory allocation from CPU to GPU when using CUDA-enabled PyTorch tensors, which is a significant advantage over simpler memory profiling tools. Key observations: 1. Accuracy: The ResourceTracker provides a much more accurate picture of memory usage compared to tracemalloc, especially for libraries like NumPy and PyTorch that manage their own memory.<br><br> 2. Comprehensive monitoring: It captures both Python-allocated memory and system RAM usage, providing a complete view of memory consumption.<br><br> 3. Execution time: The tracker also provides execution time for each operation, which includes the 1-second sleep we added.<br><br> 4. GPU monitoring: The tracker is capable of monitoring GPU memory usage when applicable.<br><br> 5. Continuous tracking: The low monitoring interval (0.001 seconds) ensures that we capture peak memory usage accurately, even for short-lived allocations. These results demonstrate that ResourceTracker successfully addresses the limitations of simpler memory profiling tools. It provides a more accurate and comprehensive view of resource usage across different Python libraries and data structures. This makes it an invaluable tool for developers working on memory-intensive applications, particularly in fields like data science and machine learning where efficient resource management is crucial. The ResourceTracker's ability to differentiate between Python-allocated memory and system RAM usage is particularly valuable when working with libraries like NumPy and PyTorch, which may use memory allocation strategies that aren't captured by Python's built-in memory profiling tools. --DIVIDER--:::warning{title="Warning"} When using the ResourceTracker, it is crucial to create only one ResourceTracker object and use it only once. This is because the ResourceTracker monitors global memory usage at the operating system level. Creating multiple ResourceTracker instances in the same script can lead to inaccurate and potentially misleading results. Each instance would independently track the global memory state, which could result in: 1. Double-counting of memory usage 2. Inconsistent baseline measurements 3. 
Difficulty in interpreting which memory changes are associated with which part of your code To avoid these issues, create a single ResourceTracker instance at the beginning of your script and wrap the entire code that you want to track inside your tracker Example of correct usage: ```python tracker = ResourceTracker(logger, monitoring_interval=0.001) # Use the same tracker instance for different parts of your code with tracker: # The entire code goes here pass ``` By adhering to this practice, you ensure that your memory usage measurements remain consistent and accurate throughout your application. :::--DIVIDER--# Summary In this publication, we explored the challenges of accurate memory tracking in Python, particularly when using libraries like NumPy and PyTorch that manage their own memory allocations. We demonstrated that the built-in `tracemalloc` module, while useful, often fails to capture true memory usage by these libraries, leading to underestimations or overestimations that can affect program performance and resource management. To address these limitations, we introduced a custom solution, the ResourceTracker. This tool enhances memory tracking by integrating multiple methods such as `tracemalloc`, `psutil` for system RAM tracking, and specific tracking for CUDA-enabled GPU devices. Unlike `tracemalloc`, ResourceTracker provides a comprehensive view by continuously monitoring memory usage, which ensures that peak usage is accurately recorded, and by measuring execution time, which adds another layer of analysis to resource management. Key features of ResourceTracker include: - Multi-faceted Memory Tracking: It combines Python memory tracking with system and GPU memory monitoring. - Continuous Monitoring: It updates memory usage continuously rather than just at snapshots. - Execution Time Measurement: It measures the total execution time of the monitored code block. - GPU Support: It supports memory tracking on GPU, crucial for machine learning applications. - Easy Integration: Implemented as a context manager, it allows for seamless integration into existing Python code. Through practical tests, ResourceTracker has proven to offer more accurate and detailed insights into memory usage compared to tracemalloc, particularly with high-memory-use libraries. It not only tracks Python-allocated memory but also captures system RAM usage, providing a holistic view of an application's resource consumption. This makes ResourceTracker an invaluable tool for developers working on complex data science and machine learning projects, where efficient and accurate resource management is critical.
WsaE5uxLBqnH
ready-tensor
cc-by-sa
Technical Excellence in AI/ML Publications: An Evaluation Rubric by Ready Tensor
![evaluation-rubric-hero.webp](evaluation-rubric-hero.webp) <div align="center"> <a href="https://www.freepik.com/free-vector/data-extraction-concept-illustration_12079896.htm#fromView=search&page=1&position=3&uuid=11dae826-208d-4ed7-82ff-a57bc0a5505d&query=AI+report">Image by storyset on Freepik</a> </div> --DIVIDER--# TL;DR This document presents a comprehensive evaluation rubric for assessing technical publications in AI and data science on Ready Tensor. The rubric evaluates publications through four fundamental questions: What is this about? (Purpose), Why does it matter? (Value/Impact), Can I trust it? (Technical Quality), and Can I use it? (Documentation). The system uses a binary scoring method (met/not met) across different criteria tailored to four main publication categories: Research & Academic Publications, Educational Content, Real-World Applications, and Technical Assets. Each category has specific requirements based on its purpose, with clear positive and negative indicators for objective assessment. The rubric serves multiple audiences: - Authors can use it to ensure their work meets quality standards - Reviewers can apply consistent evaluation criteria - Readers can understand what to expect from different publication types While meeting the rubric's criteria establishes baseline quality, exceptional publications often demonstrate unique insights, innovative approaches, or significant practical impact beyond these basic requirements.--DIVIDER--# 1. Introduction Technical publications in AI and data science need objective ways to assess their quality and effectiveness. Authors want to know if their publications meet quality standards. Readers want to know if a publication will serve their needs. Reviewers need consistent ways to evaluate submissions. This document presents an evaluation rubric that addresses these needs. The rubric examines each publication through four key questions: 1. What is this about? - Evaluates clarity of purpose and scope 2. Why does it matter? - Assesses significance and value to readers 3. Can I trust it? - Examines technical credibility and validation 4. Can I use it? - Measures practical usability and completeness By answering these questions systematically, the rubric provides clear criteria for measuring technical quality across different publication types. Authors can use it to create better publications. Reviewers can apply it for consistent evaluation.--DIVIDER--## 1.1 Purpose of the Rubric Publications on Ready Tensor serve diverse purposes - from advancing research to teaching concepts to documenting solutions. This evaluation rubric ensures each publication effectively serves its purpose by examining four key aspects: clarity of purpose, significance, technical credibility, and practical usability. Authors can use this rubric to understand what makes their publications effective. By addressing the core questions - what is this about, why does it matter, can I trust it, can I use it - authors ensure their work provides clear value to readers. The rubric uses a binary scoring system. Each criterion is marked as either met or not met based on specific evidence. This approach provides: - Objective measurement through clear evidence requirements - Consistent evaluation across different reviewers - Specific feedback on areas needing improvement - Easy verification of publication completeness For publication competitions, this rubric helps identify quality submissions. 
While meeting these criteria establishes baseline quality, exceptional publications often demonstrate unique insights, innovative approaches, or significant practical impact beyond the baseline requirements.--DIVIDER--## 1.2 Relationship to Publication Best Practices Guide from Ready Tensor The evaluation rubric works alongside the Ready Tensor Best Practices Guide: 1. [Best Practices Guide](https://app.readytensor.ai/publications/engage-and-inspire-best-practices-for-publishing-on-ready-tensor-SBgkOyUsP8qQ) - Focuses on how to present content effectively through clear writing, good organization, and visual elements 2. This Evaluation Rubric - Provides criteria and scoring methodology for evaluating technical quality, completeness, and effectiveness Authors should use both documents. Follow the Best Practices Guide for effective presentation while ensuring your work meets all rubric criteria.--DIVIDER-- ## 1.3 Using the Evaluation Rubric The evaluation rubric divides technical publications into different types, each with its own specific evaluation criteria. These publication types and their detailed criteria are covered in later sections of this document. The rubric uses binary scoring (met/not met) for individual criteria to provide a structured framework for evaluation. While total scores help indicate technical quality and completeness, they should not be used for direct comparisons between different publication types. For example, a research paper scoring 41 out of 45 criteria should not be compared with a tutorial scoring 16 out of 18 criteria, as they serve different purposes and are evaluated against different standards. Even within the same publication type, scores alone don't determine absolute quality rankings. A publication with a lower score might be more valuable due to unique insights or innovative approaches. The rubric should be viewed as a supportive tool for ensuring quality standards while recognizing that excellence can take many forms. --DIVIDER--:::info{title="A note on competitions"} While the evaluation criteria described in this publication help identify quality publications, exceptional work often goes beyond meeting basic requirements. Innovation, insight, and practical value play important roles in final evaluations. :::--DIVIDER--# 2. Evaluation Rubric Overview The evaluation rubric assesses publications by answering four fundamental questions that apply across all technical publications on Ready Tensor. --DIVIDER-- ## 2.1 Core Questions **What is this about? (Purpose)** Every publication must clearly state what readers will get from it. This means defining the scope, objectives, and intended outcomes up front. Purpose clarity helps readers immediately understand if the publication meets their needs. **Why does it matter? (Value/Impact)** Publications must establish their significance and value proposition. Readers should understand the practical, technical, or theoretical importance of the work. **Can I trust it? (Technical Quality)** All content must be technically sound. While the depth and nature of validation vary by publication type, technical accuracy and proper substantiation of claims are universal requirements. This ensures readers can confidently use or build upon the work. **Can I use it? (Documentation)** Content should be properly documented for its intended purpose. 
The type of documentation varies with publication type, but all content must provide sufficient information for readers to achieve the stated purpose.--DIVIDER-- ## 2.2 Binary Assessment Approach Evaluators apply criteria mapped to the four fundamental questions, with requirements appropriate to each publication type. A binary scoring system (met/not met) ensures clear, objective assessment: - Met: Clear evidence present in the publication - Not Met: Missing or inadequate evidence For a criterion to be met, evaluators must see clear evidence within the publication. For example, a "clear statement of purpose" needs an explicit purpose statement in the introduction. "Proper citation of sources" means all technical claims have specific references. When a criterion is not met, evaluators identify specific gaps or inadequacies, making it clear what authors need to improve. The rubric recognizes that publications serve different purposes. Success means effectively delivering value within the publication's intended scope. --DIVIDER--## 2.3 Evidence-Based Evaluation Evaluators assess criteria based on evidence found within the publication and its linked resources. Evidence must be verifiable - evaluators must be able to examine and validate claims directly. **Technical Documentation** Evaluators look for citations, equations, methodology descriptions, experimental results, and other technical content that substantiates claims and demonstrates rigor. Claims based on proprietary or closed-source methods require additional supporting evidence to be considered verified. **Visual Evidence** Diagrams, graphs, screenshots, and demo videos help communicate complex concepts and demonstrate real implementations. While visual evidence supports understanding, key technical claims must be backed by verifiable technical documentation. **Code Evidence** Code repositories, samples, installation instructions, and API documentation demonstrate implementation details and enable practical use. Open-source code allows direct verification of claims about functionality and performance. For closed-source tools, claims must be clearly scoped to what can be externally verified. **Data Evidence** Data repositories, files, and quality metrics provide concrete support for claims and enable result verification. Publicly accessible datasets allow direct validation. For proprietary datasets, publications must document data characteristics and quality measures that can be independently assessed. Each criterion is assessed as met (1 point) or not met (0 points) based on the presence and quality of verifiable evidence. The types of evidence required vary by publication type and specific criteria. Claims that cannot be verified through available evidence do not meet assessment criteria.--DIVIDER--## 2.4 Publication Type Adaptations The evaluation framework adapts its specific criteria to match the purpose of each publication type while maintaining the core questions. A research publication requires rigorous methodology and validation but may not need deployment guides. A tutorial needs clear step-by-step instructions but may not need statistical analysis. An industry case study demands business impact evidence but may not need mathematical proofs. For each publication type, criteria are selected to evaluate what matters most for that content's purpose and audience. Research publications focus on methodology, validation, and novel contributions. Educational content emphasizes clarity, completeness, and practical application. 
Industry publications prioritize real-world impact and implementation guidance. Technical asset documentation must demonstrate functionality and enable proper use. This adaptation ensures publications are evaluated fairly within their intended purpose. While all publications must answer our core questions - what is this about, why does it matter, can I trust it, can I use it - the evidence needed to answer these questions appropriately varies by type.--DIVIDER--# 3. Publication Types Publications on Ready Tensor fall into four main categories based on their primary purpose and target audience: 1. Research & Academic Publications - Present original research, methodology comparisons, and research explanations 2. Educational Content - Teach concepts, techniques, and best practices 3. Real-World Applications - Document industry solutions, case studies, and implementation guidance 4. Technical Assets - Share datasets, code, and tools with the community--DIVIDER--The following chart lists the common project types: ![publication-types.png](publication-types.png) --DIVIDER-- The following table describes each project type in detail, including the publication category, publication type, and a brief description along with examples: | Publication Category | Publication Type | Description | Examples | | -------------------------------- | ----------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------- | | Research & Academic Publications | Research Paper | Original research contributions presenting novel findings, methodologies, or analyses in AI/ML. Must include comprehensive literature review and clear novel contribution to the field. Demonstrates academic rigor through systematic methodology, experimental validation, and critical analysis of results. | • "Novel Attention Mechanism for Improved Natural Language Processing" <br>• "A New Framework for Robust Deep Learning in Adversarial Environments" | | Research & Academic Publications | Research Summary | Accessible explanations of specific research work(s) that maintain scientific accuracy while making the content more approachable. Focuses on explaining key elements and significance of original research rather than presenting new findings. Includes clear identification of original research and simplified but accurate descriptions of methodology. | • "Understanding GPT-4: A Clear Explanation of its Architecture" <br>• "Breaking Down the DALL-E 3 Paper: Key Innovations and Implications" | | Research & Academic Publications | Benchmark Study | Systematic comparison and evaluation of multiple models, algorithms, or approaches. Focuses on comprehensive evaluation methodology with clear performance metrics and fair comparative analysis. Includes detailed experimental setup and reproducible testing conditions. 
| • "Performance Comparison of Top 5 LLMs on Medical Domain Tasks" <br>• "Resource Utilization Study: PyTorch vs TensorFlow Implementations" | | Educational Content | Academic Solution Showcase | Projects completed as part of coursework, self-learning, or competitions that demonstrate application of AI/ML concepts. Focuses on learning outcomes and skill development using standard datasets or common ML tasks. Documents implementation approach and key learnings. | • "Building a CNN for Plant Disease Detection: A Course Project" <br>• "Implementing BERT for Sentiment Analysis: Kaggle Competition Entry" | | Educational Content | Blog | Experience-based articles sharing insights, tips, best practices, or learnings about AI/ML topics. Emphasizes practical knowledge and real-world perspectives based on personal or team experience. Includes authentic insights not found in formal documentation. | • "Lessons Learned from Deploying ML Models in Production" <br>• "5 Common Pitfalls in Training Large Language Models" | | Educational Content | Technical Deep Dive | In-depth, pedagogical explanations of AI/ML concepts, methodologies, or best practices with theoretical foundations. Focuses on building deep technical understanding through theory rather than implementation. Includes mathematical concepts and practical implications. | • "Understanding Transformer Architecture: From Theory to Practice" <br>• "Deep Dive into Reinforcement Learning: Mathematical Foundations" | | Educational Content | Technical Guide | Comprehensive, practical explanations of technical topics, tools, processes, or practices in AI/ML. Focuses on practical understanding and application without deep theoretical foundations. Includes best practices, common pitfalls, and decision-making frameworks. | • "ML Model Version Control Best Practices" <br>• "A Complete Guide to ML Project Documentation Standards" | | Educational Content | Tutorial | Step-by-step instructional content teaching specific AI/ML concepts, techniques, or tools. Emphasizes hands-on learning with clear examples and code snippets. Includes working examples and troubleshooting tips. | • "Building a RAG System with LangChain: Step-by-Step Guide" <br>• "Implementing YOLO Object Detection from Scratch" | | Real-World Applications | Applied Solution Showcase | Technical implementations of AI/ML solutions solving specific real-world problems in industry contexts. Focuses on technical architecture, implementation methodology, and engineering decisions. Documents specific problem context and technical evaluations. | • "Custom RAG Implementation for Legal Document Processing" <br>• "Building a Real-time ML Pipeline for Manufacturing QC" | | Real-World Applications | Case Study | Analysis of AI/ML implementations in specific organizational contexts, focusing on business problem, solution approach, and impact. Documents complete journey from problem identification to solution impact. Emphasizes business context over technical details. | • "AI Transformation at XYZ Bank: From Legacy to Innovation" <br>• "Implementing Predictive Maintenance in Aircraft Manufacturing" | | Real-World Applications | Technical Product Showcase | Presents specific AI/ML products, platforms, or services developed for user adoption. Focuses on features, capabilities, and practical benefits rather than implementation details. Includes use cases and integration scenarios. 
| • "IntellAI Platform: Enterprise-grade ML Operations Suite" <br>• "AutoML Pro: Automated Model Training and Deployment Platform" | | Real-World Applications | Solution Implementation Guide | Step-by-step guides for implementing specific AI/ML solutions in production environments. Focuses on practical deployment steps and operational requirements. Includes infrastructure setup, security considerations, and maintenance guidance. | • "Production Deployment Guide for Enterprise RAG Systems" <br>• "Setting Up MLOps Pipeline with Azure and GitHub Actions" | | Real-World Applications | Industry Report | Analytical reports examining current state, trends, and impact of AI/ML adoption in specific industries. Provides data-driven insights about adoption patterns, challenges, and success factors. Includes market analysis and future outlook. | • "State of AI in Financial Services 2024" <br>• "ML Adoption Trends in Healthcare: A Comprehensive Analysis" | | Real-World Applications | White Paper | Strategic documents proposing approaches to industry challenges using AI/ML solutions. Focuses on problem analysis, solution possibilities, and strategic recommendations. Provides thought leadership and actionable recommendations. | • "AI-Driven Digital Transformation in Banking" <br>• "Future of Healthcare: AI Integration Framework" | | Technical Assets | Dataset Contribution | Creation and publication of datasets for AI/ML applications. Focuses on data quality, comprehensive documentation, and usefulness for specific ML tasks. Includes collection methodology, preprocessing steps, and usage guidelines. | • "MultiLingual Customer Service Dataset: 1M Labeled Conversations" <br>• "Medical Image Dataset for Anomaly Detection" | | Technical Assets | Open Source Contribution | Contributions to existing open-source AI/ML projects. Focuses on collaborative development and community value. Includes clear description of changes, motivation, and impact on the main project. | • "Optimizing Inference Speed in Hugging Face Transformers" <br>• "Adding TPU Support to Popular Deep Learning Framework" | | Technical Assets | Tool/App/Software | Introduction and documentation of specific software implementations utilizing AI/ML. Focuses on tool's utility, functionality, and practical usage rather than theoretical foundations. Includes comprehensive usage information and technical specifications. | • "FastEmbed: Efficient Text Embedding Library" <br>• "MLMonitor: Real-time Model Performance Tracking Tool" | --DIVIDER-- Understanding these publication types helps authors: - Select the most appropriate format for their work - Focus on essential elements for their chosen type - Meet audience expectations for their publication category - Structure content according to type-specific standards The evaluation criteria and scoring process vary by publication type to reflect their different purposes and requirements. Later sections detail how specific quality criteria apply to each type. This classification system ensures publications effectively serve their intended purpose while maintaining consistent quality standards across different types of content. --DIVIDER-- :::info{title="Publication Type Selection"} Choose your publication type based on your primary goal. For example: - Sharing new research findings? Select Research Paper - Teaching a specific skill? Choose Tutorial - Documenting a business solution? Use Case Study - Releasing a new tool? Pick Tool/App/Software :::--DIVIDER--# 4. 
Evaluation Criteria Structure The evaluation rubric uses standardized criteria components to ensure consistent assessment. Each component serves a specific purpose in helping evaluators make objective decisions. ## 4.1 Criteria Components **1. Criterion Name** A clear, descriptive title that identifies what aspect of the publication is being evaluated. **2. Criterion Description** The description defines what the criterion measures and provides complete context for evaluation. It specifies where in the publication to look for evidence, clarifies what qualifies as meeting the criterion, and identifies any special cases or exceptions. A good criterion description removes ambiguity about what constitutes meeting the standard. **3. Scoring Logic** The rubric uses binary scoring (0 or 1) with explicit rules stating what merits each score. This ensures evaluators have clear guidelines for assessment decisions. The scoring logic aims to remove subjectivity from the evaluation process by providing specific, measurable requirements. **4. Positive Indicators** Positive indicators are observable evidence in the publication that signal a criterion has been met. They provide concrete, verifiable signs that evaluators can look for during assessment. For example, if evaluating code quality, a positive indicator might be "Code includes descriptive comments explaining each major function." These are specific elements that can be visually identified or objectively verified in the publication. **5. Negative Indicators** Negative indicators are observable evidence that a criterion has not been met. They represent specific, verifiable red flags that evaluators can spot during review. Following the code quality example, a negative indicator might be "Functions lack parameter descriptions" or "No comments explaining complex logic." These indicators point to concrete, observable issues rather than subjective judgments. ## 4.2 Purpose of Standardized Components This structured approach promotes objective evaluation through clear rules and consistent assessment standards. When all evaluators use the same detailed criteria, they can arrive at similar scoring decisions independently. The components also provide actionable feedback - authors know exactly what they need to improve based on which criteria they did not meet. The detailed criteria structure means publication creators can understand requirements before they begin writing. This helps them include necessary elements and avoid common problems that would reduce their evaluation scores. Let's examine how these components work together through an example...--DIVIDER-- ## 4.3 Example Criterion: Clear Purpose and Objectives This fundamental criterion serves as a good example because it demonstrates how seemingly subjective requirements ("clarity of purpose") can be evaluated objectively through specific indicators. **Criterion Definition** ``` Evaluates whether the publication explicitly states its core purpose within the first paragraph or two. The purpose statement must clearly indicate what specific problem is being solved, what will be learned, or what will be demonstrated. This must appear in the abstract, tl;dr, introduction, or overview section and be immediately clear without requiring further reading. The key differentiator is an explicit, specific purpose statement near the top that lets readers immediately understand what the publication will deliver. 
``` **Scoring Logic** ``` - Score 0: Purpose is unclear, appears too late, requires inference, or is too vague - Score 1: Explicit purpose statement appears in first paragraph/10 sentences and clearly states specific deliverables ``` **Positive Indicators** ``` Evaluators look for these observable elements: - States specific purpose in first paragraph - Uses explicit purpose statement phrases ("This paper demonstrates...", "In this guide, you will learn...") - Lists specific skills or knowledge to be gained - States exact problem being solved - Defines precise scope of work - Indicates specific contributions or solutions - Provides clear list of deliverables ``` **Negative Indicators** ``` Evaluators watch for these red flags: - No purpose or objective stated - Purpose appears after several paragraphs - Requires reading multiple paragraphs to understand goal - Lists multiple potential purposes - Purpose scattered across document - Ambiguous or general statements - Purpose must be pieced together from multiple sections ``` **Why This Definition Works** Notice how this criterion converts the abstract concept of "clear purpose" into specific, verifiable elements: 1. Location is objective - must appear in first paragraph 2. Phrasing is verifiable - looks for specific statement types 3. Content is measurable - checks for concrete deliverables 4. Assessment is binary - either meets all requirements or does not The indicators remove subjectivity by specifying exactly what evaluators should look for. Authors know precisely where to put their purpose statement and what it should contain. --DIVIDER-- ## 4.4 Complete List of Evaluation Criteria The following table lists all technical criteria used in the evaluation rubric: ![criteria-list.svg](criteria-list.svg)--DIVIDER--For detailed definitions of each criterion, including complete descriptions, scoring logic, and positive/negative indicators, refer to the supplementary document **Publication Evaluation Criteria Reference Guide.pdf** uploaded with this publication. Authors and evaluators should consult this reference when preparing or assessing publications. --DIVIDER--# 5. Publication Types and Evaluation Criteria The evaluation rubric defines specific criteria for each publication type on Ready Tensor. These criteria ensure publications effectively serve their intended purpose and audience. By systematically answering core questions about purpose, significance, trustworthiness, and usability, authors can create high-quality publications that meet audience needs. The complete mapping of criteria to publication types is provided in zipped package titled `Scoring Criteria Per Publication Type.zip` in the **Resources** section. While specific requirements vary, all criteria support answering the core questions in ways that match each publication type's purpose and audience expectations. --DIVIDER--:::caution{title="About the Evaluation Rubric"} The rubric provides a scoring mechanism where publications earn points by meeting different criteria. A higher score indicates stronger technical quality and completeness. Publications do not need to meet all criteria - the score reflects how many criteria are satisfied. For competitions, while scoring helps identify quality submissions, exceptional publications often provide unique insights, innovative approaches, or significant practical value beyond standard requirements. :::--DIVIDER--# 6. 
How the Scoring Mechanism Works The evaluation rubric uses a straightforward scoring system based on objective criteria per publication type. The evaluation follows these steps: 1. **Publication Type**: Determine the specific type based on content and purpose (e.g., Research Paper, Tutorial, Dataset) 2. **Applicable Criteria**: Apply the criteria set defined for that publication type 3. **Binary Assessment**: Score each criterion: - 1 point if the criterion is met - 0 points if the criterion is not met 4. **Equal Weighting**: Each criterion carries an equal weight of one point 5. **Total Score**: Sum the points across all applicable criteria 6. **Final Assessment**: Compare the total points to the maximum possible score for that publication type. The score can be converted to a percentage for easier interpretation and simpler comparison across different publications. --DIVIDER--:::info{title="Note on Scoring and Competition Evaluation"} This rubric adopts a simple approach where all criteria carry equal weight. Future versions may introduce weighted scoring to emphasize specific aspects like innovation or practical impact. While the rubric helps identify quality publications through objective criteria, competition winners are selected based on additional factors. A publication scoring 22/25 might win over one scoring 25/25 if it demonstrates exceptional innovation or practical value. The rubric serves as a baseline quality check rather than the sole determinant of competition outcomes. :::--DIVIDER--# 7. Example Publication Evaluation To demonstrate how the evaluation rubric works in practice, let us examine a real publication: [Decade of AI and ML Conferences: A Comprehensive Dataset for Advanced Research and Analysis](https://app.readytensor.ai/publications/iERF3DYAwsD9). ## 7.1 Evaluation Criteria for the Publication This publication falls under the "Dataset Contribution" type within the "Technical Assets" category. The evaluation rubric defines 29 specific criteria for this publication type, ensuring it meets the intended purpose and audience expectations. These are listed in the following figure. ![Dataset Contribution Scoring Criteria.png](Dataset%20Contribution%20Scoring%20Criteria.png) The technical content and resources provided by the authors are evaluated against these criteria to determine the publication's quality and effectiveness. The evaluation process involves systematically answering core questions about the publication's purpose, significance, trustworthiness, and usability. --DIVIDER--## 7.2 Evaluation Report We have attached the evaluation report for this publication in the document titled "Evaluation Report - Decade of AI and ML Conferences.pdf" in the **Resources** section. This report provides a detailed assessment of the publication based on the defined criteria. This dataset contribution publication scores 25 out of 29 possible points, meeting most quality criteria. The four criteria not met are listed in the following table: | Criteria | Explanation | Recommendation | | -------------------------- | ----------- | -------------- | | 1.
Data Inclusion Criteria | The publication does not provide clear criteria for data inclusion or exclusion from the dataset. While it describes the dataset and its contents, it lacks explicit rules or rationale for how data was selected or filtered, which is essential for transparency and reproducibility. | Include a section that outlines the criteria for data inclusion and exclusion, providing clear rules, justifications for filtering decisions, and any edge cases that were considered. | | 2. Limitations Discussion | The publication does not discuss any limitations, trade-offs, or potential issues related to the project work. There is no mention of key limitations, scope boundaries, or the impact of any limitations, which are essential for a comprehensive understanding of the research. | Include a section that discusses the limitations of the dataset and the Mini-RAG system, addressing any potential issues, trade-offs, and the impact of these limitations on the research outcomes. | | 3. Future Directions | The publication does not discuss any future directions or research gaps. While it provides a comprehensive overview of the dataset and its applications, it lacks specific suggestions for future work or improvements, which are necessary to score positively on this criterion. | Include a section that outlines specific future research directions, identifies research gaps, or suggests potential improvements to the current system. | | 4. Contact Information | The publication does not provide any contact information or support channels for users to reach out for questions or issues. There are no references to external channels such as GitHub issues, support email addresses, or community forums. | Include contact information for the creators or maintainers, such as an email address or links to support channels, to assist users in getting help or reporting issues. | The report also provides recommendations for addressing these gaps and improving the publication's quality. Authors can use this feedback to enhance their work and meet the criteria more effectively. This example demonstrates how the evaluation rubric identifies both strengths and specific areas for improvement in a publication. These insights help authors strengthen their work while retaining flexibility in how they address certain criteria. --DIVIDER--# 8. Summary This evaluation rubric provides a structured approach to assessing AI and data science publications on Ready Tensor. The rubric: - Defines clear quality criteria for different publication types - Uses binary (met/not met) scoring for objective assessment - Adapts requirements based on publication category and type - Provides specific indicators of quality for each criterion - Enables constructive feedback through detailed recommendations Authors can use this rubric as a guide when preparing their publications. Meeting the criteria ensures publications provide clear value through: - Well-defined purpose and objectives - Comprehensive technical documentation - Proper supporting materials - Clear practical applications - Reproducible implementations For competition participants, while meeting these criteria establishes baseline quality, exceptional publications often demonstrate unique insights, innovative approaches, or substantial practical impact beyond the basic requirements. The included example evaluation demonstrates how the rubric works in practice, showing both strengths and opportunities for improvement in a real publication.
This practical approach helps authors understand exactly what makes their publications effective and how to enhance their contributions to the AI and data science community. --DIVIDER--
yzN0OCQT7hUS
ready-tensor
cc-by-sa
One Model, Five Superpowers: The Versatility of Variational Auto-Encoders
![hero copy.jpg](hero%20copy.jpg)--DIVIDER--# TL;DR Variational Auto-Encoders (VAEs) are versatile deep learning models with applications in data compression, noise reduction, synthetic data generation, anomaly detection, and missing data imputation. This publication demonstrates these capabilities using the MNIST dataset, providing practical insights for AI/ML practitioners. -----DIVIDER--# Introduction Variational Auto-Encoders (VAEs) are powerful generative models that exemplify unsupervised deep learning. They use a probabilistic approach to encode data into a distribution of latent variables, enabling both data compression and the generation of new, similar data instances. VAEs have become crucial in modern machine learning due to their ability to learn complex data distributions and generate new samples without requiring explicit labels. This versatility makes them valuable for tasks like image generation, enhancement, anomaly detection, and noise reduction across various domains including healthcare, autonomous driving, and multimedia generation. This publication demonstrates five key applications of VAEs: data compression, data generation, noise reduction, anomaly detection, and missing data imputation. By exploring these diverse use cases, we aim to showcase VAEs' versatility in solving various machine learning problems, offering practical insights for AI/ML practitioners. To illustrate these capabilities, we use the MNIST dataset of handwritten digits. This well-known dataset, consisting of 28x28 pixel grayscale images, provides a manageable yet challenging benchmark for exploring VAEs' performance in different data processing tasks. Through our examples with MNIST, we demonstrate how VAEs can effectively handle a range of challenges, from basic image compression to more complex tasks like anomaly detection and data imputation. Check the **Models** section for the GitHub code repository for this publication.--DIVIDER--:::info{title="Note"} Although the original MNIST images are in black and white, we have utilized color palettes in our visualizations to make the demonstrations more visually engaging. :::--DIVIDER--# Understanding VAEs <h2> Basic Concept and Architecture</h2> VAEs are a class of generative models designed to encode data into a compressed latent space and then decode it to reconstruct the original input. The architecture of a VAE consists of two main components: the encoder and the decoder. --DIVIDER-- ![VAE_architecture.png](VAE_architecture.png)--DIVIDER--The diagram above illustrates the key components of a VAE: 1. <b>Encoder:</b> Compresses the input data into a latent space representation. 2. <b>Latent Space (Z):</b> Represents the compressed data as a probability distribution, typically Gaussian. 3. <b>Decoder:</b> Reconstructs the original input from a sample drawn from the latent space distribution. --DIVIDER--The encoder takes an input, such as an image, call it $X$, and compresses it into a set of parameters defining a probability distribution in the latent space—typically the mean and variance of a Gaussian distribution. This probabilistic approach is what sets VAEs apart; instead of encoding an input as a single point, it is represented as a distribution over potential values. The decoder then uses a sample from this distribution to reconstruct the original input (shown as $\hat{X}$). This sampling step would normally make the model non-differentiable.
To overcome this challenge, VAEs use the so-called "reparameterization trick," which allows the model to back-propagate gradients through random operations by decomposing the sampling process into deterministic and stochastic components. This makes the VAE end-to-end differentiable which enables training using backpropagation.--DIVIDER--<h2> Comparison with Traditional Auto-Encoders </h2> While VAEs share some similarities with traditional auto-encoders, they have distinct features that set them apart. Understanding these differences is crucial for grasping the unique capabilities of VAEs. The following table highlights key aspects where VAEs differ from their traditional counterparts: --DIVIDER--| Aspect | Traditional Auto-Encoders | Variational Auto-Encoders (VAEs) | | --------------------- | ---------------------------------------- | ------------------------------------------------ | | Latent Space | • Deterministic encoding | • Probabilistic encoding | | | • Fixed point for each input | • Distribution (mean, variance) | | Objective Function | • Reconstruction loss | • Reconstruction loss + KL divergence | | | • Preserves input information | • Balances reconstruction and prior distribution | | Generative Capability | • Limited | • Inherently generative | | | • Primarily for dimensionality reduction | • Can generate new, unseen data | | Applications | • Feature extraction | • All traditional AE applications, plus: | | | • Data compression | • Synthetic generation | | | • Noise reduction | | | | • Missing Data Imputation | | | | • Anomaly Detection | | | Sampling | • Not applicable | • Can sample different points for same input | | Primary Function | • Data representation | • Data generation and representation |--DIVIDER--# VAE Example in PyTorch To better understand the practical implementation of a Variational Autoencoder, let's examine a concrete example using PyTorch, a popular deep learning framework. This implementation is designed to work with the MNIST dataset, encoding 28x28 pixel images into a latent space and then reconstructing them. The full code is available here: [Jupyter Notebook](https://github.com/readytensor/rt_img_compression_autoencoder/blob/main/src/vae.ipynb) The following code defines a VAE class that includes both the encoder and decoder networks. It also implements the reparameterization trick, which is crucial for allowing backpropagation through the sampling process. 
Additionally, we'll look at the loss function, which combines reconstruction loss with the Kullback-Leibler divergence to ensure the latent space has good properties for generation.--DIVIDER-- ```python class VAE(nn.Module): def __init__(self, latent_dim): super(VAE, self).__init__() # Encoder self.conv1 = nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1) # Input is 1x28x28, output is 32x14x14 self.conv2 = nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1) # Output is 64x7x7 self.fc1 = nn.Linear(64 * 7 * 7, 400) self.fc21 = nn.Linear(400, latent_dim) # mu self.fc22 = nn.Linear(400, latent_dim) # logvar # Decoder self.fc3 = nn.Linear(latent_dim, 400) self.fc4 = nn.Linear(400, 64 * 7 * 7) self.conv2_t = nn.ConvTranspose2d(64, 32, kernel_size=3, stride=2, padding=1, output_padding=1) # Output is 32x14x14 self.conv1_t = nn.ConvTranspose2d(32, 1, kernel_size=3, stride=2, padding=1, output_padding=1) # Output is 1x28x28 def encode(self, x): x = F.relu(self.conv1(x)) x = F.relu(self.conv2(x)) x = x.view(-1, 64 * 7 * 7) x = F.relu(self.fc1(x)) return self.fc21(x), self.fc22(x) def reparameterize(self, mu, logvar): std = torch.exp(0.5 * logvar) eps = torch.randn_like(std) return mu + eps * std def decode(self, z): z = F.relu(self.fc3(z)) z = F.relu(self.fc4(z)) z = z.view(-1, 64, 7, 7) z = F.relu(self.conv2_t(z)) z = torch.sigmoid(self.conv1_t(z)) return z def forward(self, x): mu, logvar = self.encode(x) z = self.reparameterize(mu, logvar) return self.decode(z), mu, logvar # Loss function def loss_function(recon_x, x, mu, logvar): # Calculate the Binary Cross Entropy loss between the reconstructed image and the original image BCE = F.binary_cross_entropy(recon_x, x, reduction='sum') # KL divergence measures how one probability distribution diverges from a second, expected probability distribution. # For VAEs, it measures how much information is lost when using the approximations of the distributions. KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) return BCE + KLD ``` --DIVIDER--Let's dissect each part of the code to understand how a VAE is built and operates using PyTorch, a popular deep learning library. First, we have the constructor: ```python def __init__(self, latent_dim): super(VAE, self).__init__() # Encoder self.conv1 = nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1) # Input is 1x28x28, output is 32x14x14 self.conv2 = nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1) # Output is 64x7x7 self.fc1 = nn.Linear(64 * 7 * 7, 400) self.fc21 = nn.Linear(400, latent_dim) # mu self.fc22 = nn.Linear(400, latent_dim) # logvar # Decoder self.fc3 = nn.Linear(latent_dim, 400) self.fc4 = nn.Linear(400, 64 * 7 * 7) self.conv2_t = nn.ConvTranspose2d(64, 32, kernel_size=3, stride=2, padding=1, output_padding=1) # Output is 32x14x14 self.conv1_t = nn.ConvTranspose2d(32, 1, kernel_size=3, stride=2, padding=1, output_padding=1) ``` The `__init__` method initializes the VAE. It takes latent_dim as an argument, specifying the size of the latent space, a key feature of the VAE that determines the dimensionality of the encoded representation. It contains the definition of the encoder and decoder parts. 
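Before walking through each component, a quick sanity check can make the constructor concrete. The snippet below is an illustrative sketch, not part of the original notebook: it instantiates the class, pushes a dummy batch through it, and confirms the tensor shapes and loss value. It assumes the `VAE` class and `loss_function` shown above are in scope, along with the usual `import torch`, `import torch.nn as nn`, and `import torch.nn.functional as F` that the class definition relies on.

```python
import torch

# Illustrative sanity check for the VAE defined above (assumes VAE and
# loss_function are already defined). A latent size of 10 is used as an example.
model = VAE(latent_dim=10)
dummy_batch = torch.rand(16, 1, 28, 28)          # 16 fake MNIST-sized images with values in [0, 1]

reconstruction, mu, logvar = model(dummy_batch)
print(reconstruction.shape)                      # torch.Size([16, 1, 28, 28])
print(mu.shape, logvar.shape)                    # torch.Size([16, 10]) torch.Size([16, 10])

# One illustrative optimization step using the combined BCE + KL loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
optimizer.zero_grad()
loss = loss_function(reconstruction, dummy_batch, mu, logvar)
loss.backward()
optimizer.step()
print(loss.item())                               # a single scalar combining both loss terms
```

In a real training run, `dummy_batch` would be replaced by mini-batches of MNIST images, and the forward pass, loss computation, and optimizer step would be repeated over many epochs.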
<h2> Encoder Network</h2> ```python self.conv1 = nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1) self.conv2 = nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1) self.fc1 = nn.Linear(64 * 7 * 7, 400) self.fc21 = nn.Linear(400, latent_dim) # Mean (mu) self.fc22 = nn.Linear(400, latent_dim) # Log variance (logvar) ``` The Encoder consists of convolutional layers followed by fully connected layers. The convolutional layers help in capturing spatial hierarchies in the image data, reducing its dimensionality before it is mapped to the latent space parameters by the fully connected layers. <h2> Decoder Network </h2> ```python self.fc3 = nn.Linear(latent_dim, 400) self.fc4 = nn.Linear(400, 64 * 7 * 7) self.conv2_t = nn.ConvTranspose2d(64, 32, kernel_size=3, stride=2, padding=1, output_padding=1) self.conv1_t = nn.ConvTranspose2d(32, 1, kernel_size=3, stride=2, padding=1, output_padding=1) ``` The Decoder utilizes transposed convolutional layers to perform the inverse operation of the encoder, upscaling the encoded latent representations back to the original image dimensions. <h2> Loss function</h2> ```python def loss_function(recon_x, x, mu, logvar): BCE = F.binary_cross_entropy(recon_x, x, reduction='sum') KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) return BCE + KLD ``` The loss function combines binary cross-entropy (BCE) for reconstruction loss and the KL divergence (KLD) for regularizing the latent space distribution. <h2> Additional Methods</h2> ```python def encode(self, x): x = F.relu(self.conv1(x)) x = F.relu(self.conv2(x)) x = x.view(-1, 64 * 7 * 7) x = F.relu(self.fc1(x)) return self.fc21(x), self.fc22(x) def reparameterize(self, mu, logvar): std = torch.exp(0.5 * logvar) eps = torch.randn_like(std) return mu + eps * std def decode(self, z): z = F.relu(self.fc3(z)) z = F.relu(self.fc4(z)) z = z.view(-1, 64, 7, 7) z = F.relu(self.conv2_t(z)) z = torch.sigmoid(self.conv1_t(z)) return z def forward(self, x): mu, logvar = self.encode(x) z = self.reparameterize(mu, logvar) return self.decode(z), mu, logvar ``` - Encode Function: Transforms the input image into two sets of parameters in the latent space, representing the means and log variances.<br> - Reparameterize Function: Uses the reparameterization trick to allow for gradient backpropagation through stochastic processes.<br> - Decode Function: Reconstructs the image from the latent space representation. --DIVIDER--:::info{title="Info"} <h2>Note on Model Architecture</h2> It's important to note that the architecture of Variational Auto-Encoders (VAEs) is highly adaptable and does not need to be confined to any specific type of layer or structure. VAEs can be designed using a variety of architectural components to suit specific tasks and data types. While convolutional layers are ideal for image data, fully connected (linear) layers may be better suited for tabular data. For sequential or time series data, incorporating LSTM (Long Short-Term Memory) layers can be highly effective. This flexibility allows VAEs to be tailored to a wide range of applications, optimizing performance across different types of data. :::--DIVIDER--:::info{title="Info"} <h2>What is Reparameterization?</h2> In the context of a VAE, the encoder network generates two parameters: mean (mu) and log-variance (logvar) of a Gaussian distribution. 
Instead of directly sampling from this distribution (which would inhibit gradient flow because sampling is a stochastic process), the reparameterization trick is used to decompose the sampling process into a deterministic part and a stochastic part. <br><br> <h3>Breakdown of the reparameterize Function</h3> ```python def reparameterize(self, mu, logvar): std = torch.exp(0.5 * logvar) # Convert log-variance to standard deviation eps = torch.randn_like(std) # Generate random noise with a standard normal distribution return mu + eps * std # Scale and shift the noise to create the sample ``` 1. Convert Log-Variance to Standard Deviation: - `std = torch.exp(0.5 * logvar)` The log variance (`logvar`) is transformed into the standard deviation (`std`). This transformation is necessary because the variance must be non-negative and the logarithm of variance can range from negative infinity to positive infinity, making it easier to optimize. The 0.5 factor is due to the properties of logarithms (since variance = exp(logvar) and std = sqrt(variance)). 2. Generate Random Noise: - `eps = torch.randn_like(std)` Random noise `eps` is generated from a standard normal distribution (mean = 0, std = 1) with the same shape as the standard deviation. This randomness introduces the stochastic element needed for the generative process. 3. Scale and Shift the Noise: - `return mu + eps * std` The noise is scaled by the standard deviation and shifted by the mean (`mu`). This step effectively samples from the Gaussian distribution defined by `mu` and `std`, but in a way that allows the gradients to flow back through the parameters `mu` and `logvar` during training. ::: --DIVIDER--# Applying VAEs: From Theory to Practice Now that we've explored the theoretical underpinnings of VAEs and examined a concrete implementation in PyTorch, let's dive into the practical applications of this powerful model. We'll start by focusing on one of the most fundamental capabilities of VAEs: data compression. In the following sections, we'll demonstrate how VAEs can be utilized for efficient data compression, using the MNIST dataset as our example. This application showcases the VAE's ability to capture the essence of complex data in a compact latent representation, a feature that has significant implications for data storage, transmission, and processing. --DIVIDER--:::info{title="Note on Applicability"} While our examples use MNIST for simplicity, the principles of VAE applications extend to various real-world datasets. These techniques can be adapted for diverse scenarios, from image processing to tabular data to time series analysis, offering powerful solutions for data compression, generation, denoising, anomaly detection, and imputation across different domains. :::--DIVIDER--## Data Compression and Dimensionality Reduction Modern data-driven applications often require efficient methods for data compression and dimensionality reduction to manage storage, processing, and transmission costs. Variational Autoencoders (VAEs) offer a powerful solution to this challenge, particularly for complex, high-dimensional data like images. <h2> How VAEs Compress MNIST Images </h2> Variational Auto-Encoders offer a novel approach to data compression through their probabilistic latent space. When applying VAEs to the MNIST dataset, the process involves:<br><br> 1. Encoding: Each 28x28 pixel image of the MNIST dataset, representing handwritten digits, is input into the encoder part of the VAE. 
The encoder network compresses this image into a much smaller set of latent variables, capturing the essential features of the image in terms of mean and variance.<br><br> 2. Latent Space Representation: The critical information of each image is stored in a lower-dimensional latent space, where the size of this space is significantly smaller than the original image size, effectively compressing the image data.<br><br> 3. Decoding: The decoder part of the VAE then takes these latent variables and reconstructs the image, aiming to match the original as closely as possible. The training process involves tuning the encoder and decoder to minimize the loss, ensuring that the essential features are preserved. <h2> Visualizing Compressed vs. Original Digits </h2> To demonstrate the effectiveness of VAEs in compressing MNIST images, we can visualize the original and the reconstructed images side by side: ![vae_reconstruction.jpg](vae_reconstruction.jpg) The results show how VAEs can effectively compress the 28x28 pixel images of handwritten digits into a lower-dimensional latent space of size 10, about 1.2% of the original size. Despite this significant reduction in dimensionality, the reconstructed images closely resemble the originals, demonstrating the VAE's powerful ability to capture essential features while compressing the data.--DIVIDER--## Data Generation <h2>The Need for Synthetic Data in AI/ML </h2> Synthetic data generation plays a crucial role in AI/ML, especially when real data is scarce, sensitive, or expensive to collect. It's valuable for augmenting training datasets, improving model robustness, and providing controlled scenarios for testing and validation. <br/><br/> <h2> Generating New MNIST-like Digits with VAEs</h2> VAEs stand out in their ability to generate new data that mimics the original training data. Here’s how VAEs can be used to generate new, MNIST-like digits:<br><br> 1. **Training**: A VAE is first trained on the MNIST dataset, learning the underlying distribution of the data represented in a latent space. <br/> 2. **Sampling**: After training, new points are sampled from the latent space distribution. Because this space has been regularized during training (encouraged to approximate a Gaussian distribution), the samples are likely to be meaningful.<br/> 3. **Decoding**: These sampled latent points are then passed through the decoder, which reconstructs new digits that reflect the characteristics of the training data but are novel creations. <br/> <h2> Exploring the Latent Space: Morphing Between Digits</h2> One of the fascinating properties of VAEs is the ability to explore and visualize continuity and interpolation within the latent space:<br><br> 1. Continuous Interpolation: By choosing two points in the latent space corresponding to different digits, one can interpolate between these points. The decoder generates outputs that gradually transition from one digit to another, illustrating how features morph seamlessly from one to the other (see the code sketch after this list).<br><br> 2. Visualizing Morphing: This can be visualized by creating a sequence of images where each image represents a step from one latent point to another. This not only demonstrates the smoothness of the latent space but also the VAE’s ability to handle and mix digit features creatively.<br><br> 3. Insight into Latent Variables: Such explorations provide insights into what features are captured by different dimensions of the latent space (e.g., digit thickness, style, orientation).
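The snippet below is a minimal, illustrative sketch of this interpolation idea (not code from the publication's repository). It assumes `model` is a trained instance of the `VAE` class defined earlier, and that `img_a` and `img_b` are two MNIST digits stored as `1x1x28x28` tensors scaled to [0, 1]; these variable names are placeholders.

```python
import torch

# Illustrative latent-space interpolation between two digits (assumed setup:
# `model` is a trained VAE as defined earlier; img_a and img_b are 1x1x28x28
# tensors holding two MNIST digits with pixel values in [0, 1]).
model.eval()
with torch.no_grad():
    mu_a, _ = model.encode(img_a)                # use the latent means as codes
    mu_b, _ = model.encode(img_b)

    steps = 10
    frames = []
    for i in range(steps):
        alpha = i / (steps - 1)                  # interpolation weight from 0.0 to 1.0
        z = (1 - alpha) * mu_a + alpha * mu_b    # linear blend of the two latent codes
        frames.append(model.decode(z))           # decode back to a 1x1x28x28 image

morph = torch.cat(frames, dim=0)                 # (steps, 1, 28, 28) morphing sequence
```

Plotting the resulting frames side by side produces the kind of smooth digit-to-digit transition described above.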
We trained a VAE on MNIST with a 2D latent space for easy visualization and manipulation. This allows us to observe how changes in latent variables affect generated images. The figure below shows generated images for latent dimension values from -3 to 3 on both axes: ![vae_grid_plot.jpg](vae_grid_plot.jpg) This exploration is not only a powerful demonstration of the model's internal representations but also serves as a tool for understanding and debugging the model’s behavior. --DIVIDER--## Noise Reduction Noise in data is a common issue in various fields, from medical imaging to autonomous vehicles. It can significantly degrade the performance of machine learning models, making effective denoising techniques crucial. <h2> Demonstrating VAE-based Denoising on MNIST</h2> We trained multiple VAEs to remove noise from MNIST images, testing different noise percentages. We created noisy images by randomly replacing a sample of pixels with values from a uniform distribution between 0 and 1. The following images show the denoising performance of VAEs at different levels of noise contamination: ![noisy_vs_denoised_0.05.jpg](noisy_vs_denoised_0.05.jpg) ![noisy_vs_denoised_0.1.jpg](noisy_vs_denoised_0.1.jpg) ![noisy_vs_denoised_0.25.jpg](noisy_vs_denoised_0.25.jpg) ![noisy_vs_denoised_0.5.jpg](noisy_vs_denoised_0.5.jpg) Results seen in the charts above demonstrate the VAE's capability in reconstructing clean images from noisy inputs, highlighting its potential in restoring and enhancing image data usability in practical scenarios. --DIVIDER--## Anomaly Detection Anomaly detection is crucial in various industries, identifying patterns that deviate from expected behavior. These anomalies can indicate critical issues such as fraudulent transactions or mechanical faults. <h2> Using VAEs to Spot Anomalies in MNIST</h2> VAEs can effectively detect anomalies by modeling the distribution of normal data: 1. The VAE is trained on MNIST digits. 2. Anomalies are identified by higher reconstruction loss on test set. 3. A threshold is set to flag digits with excessive loss as anomalies. The histogram below shows reconstruction errors on the test set: ![reconstruction_errors_histogram.jpg](reconstruction_errors_histogram.jpg) The following images show the top 10 digits with the highest loss, representing potential anomalies: ![highest_reconstruction_errors.jpg](highest_reconstruction_errors.jpg) We can confirm that the 10 samples are badly written digits and should be considered anomalies. To further test the VAE's anomaly detection capabilities, we tested the VAE model on images of letters—data that the model was not trained on. This experiment serves two purposes: 1. Validating the model's ability to identify clear out-of-distribution samples. 2. Exploring the nuances of how the model interprets shapes similar to digits. The following chart shows the original images of letters and their reconstructions. ![letter_reconstruction.jpg](letter_reconstruction.jpg) We also marked the reconstruction errors of the samples on the histogram of reconstruction errors from the test set. ![reconstruction_errors_with_letters.jpg](reconstruction_errors_with_letters.jpg) These visualizations reveal several interesting insights: 1. Most letters, except 'Z', show poor reconstructions and high reconstruction errors, clearly marking them as anomalies. 2. The letter 'Z' is reconstructed relatively well, likely due to its similarity to the digit '2'. Its reconstruction error falls within the normal range of the test set. 3. 
The letter 'M' shows the most distorted reconstruction, corresponding to the highest reconstruction error. This aligns with 'M' being the most dissimilar to any MNIST digit. 4. Interestingly, 'H' is reconstructed to somewhat resemble the digit '8', the closest MNIST digit in shape. While still an anomaly, it has the lowest error among the non-'Z' letters. This experiment highlights: - The VAE's effectiveness in identifying clear anomalies (most letters). - The model's tendency to interpret unfamiliar shapes in terms of the digits it knows. - The importance of shape similarities in the model's interpretation, as demonstrated by the 'Z' and 'H' cases. These observations underscore the VAE's capability in anomaly detection while also revealing its limitations when faced with out-of-distribution data that shares similarities with in-distribution samples.--DIVIDER--## Missing Data Imputation Incomplete data is a common challenge in machine learning, leading to biased estimates and less reliable models. This issue is prevalent in various domains, including healthcare and finance. <h2> Reconstructing Partial MNIST Digits with VAEs </h2> VAEs offer a robust approach to missing data imputation: 1. Training: A VAE learns the distribution of complete MNIST digits. 2. Simulating Missing Data: During training, parts of input digits are randomly masked. The VAE is tasked with reconstructing the full, original digit from this partial input. 3. Inference: When presented with new partial digits, the VAE leverages its learned distributions to infer and reconstruct missing sections, effectively filling in the gaps. This process enables the VAE to generalize from partial information, making it adept at handling various missing data scenarios. The image below demonstrates the VAE's capability in missing data imputation: ![missing_vs_reconstructed.jpg](missing_vs_reconstructed.jpg) These examples illustrate how effectively the VAE infers and reconstructs missing parts of the digits, showcasing its potential for data imputation tasks. --DIVIDER--# VAEs vs. GANs While this publication has focused on Variational Autoencoders (VAEs), it's important to consider how they compare to other popular generative models, particularly Generative Adversarial Networks (GANs). Both VAEs and GANs are powerful techniques for data generation in machine learning, but they approach the task in fundamentally different ways and have distinct strengths and weaknesses. GANs, introduced by Ian Goodfellow et al. in 2014, have gained significant attention for their ability to generate highly realistic images. They work by setting up a competition between two neural networks: a generator that creates fake data, and a discriminator that tries to distinguish fake data from real data. This adversarial process often results in very high-quality outputs, particularly in image generation tasks. Understanding the differences between VAEs and GANs can help practitioners choose the most appropriate model for their specific use case. 
The following table provides a detailed comparison of these two approaches: | Aspect | Variational Autoencoders (VAEs) | Generative Adversarial Networks (GANs) | |--------|--------------------------------|----------------------------------------| | Output Quality | Slightly blurrier, but consistent | Sharper, more realistic images | | Training Process | Easier and usually faster to train, well-defined objective function | Can be challenging and time-consuming, potential mode collapse | | Latent Space | Structured and interpretable | Less structured, harder to control | | Versatility | Excel in both generation and inference tasks | Primarily focused on generation tasks | | Stability | More stable training, consistent results | Can suffer from training instability | | Primary Use Cases | Data compression, denoising, anomaly detection, controlled generation | High-fidelity image generation, data augmentation | | Reconstruction Ability | Built-in reconstruction capabilities | No inherent reconstruction ability | | Inference | Capable of inference on new data | Typically requires additional techniques for inference | <h2> When to Choose VAEs over GANs </h2> - Applications requiring both generation and reconstruction capabilities - Tasks needing interpretable and controllable latent representations - Scenarios demanding training stability and result consistency - Projects involving data compression, denoising, or anomaly detection - When balancing generation quality with ease of implementation and versatility - When faster training times are preferred--DIVIDER--# Conclusion This article has demonstrated the versatility of Variational Auto-Encoders (VAEs) across various machine learning applications, including data compression, generation, noise reduction, anomaly detection, and missing data imputation. VAEs' unique ability to model complex distributions and generate new data instances makes them powerful tools for tasks where traditional methods may fall short. We encourage researchers, developers, and enthusiasts to explore VAEs further. Whether refining architectures, applying them to new data types, or integrating them with other techniques, the potential for innovation is vast. We hope this exploration inspires you to incorporate VAEs into your work, contributing to technological advancement and opening new avenues for discovery. -----DIVIDER--# References 1. Kingma, D. P., & Welling, M. (2013). Auto-Encoding Variational Bayes. arXiv preprint arXiv:1312.6114. [https://arxiv.org/abs/1312.6114](https://arxiv.org/abs/1312.6114) 2. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2014). Generative Adversarial Nets. In Advances in Neural Information Processing Systems (pp. 2672-2680). [https://papers.nips.cc/paper/2014/file/5ca3e9b122f61f8f06494c97b1afccf3-Paper.pdf](https://papers.nips.cc/paper/2014/file/5ca3e9b122f61f8f06494c97b1afccf3-Paper.pdf)