📙 AI-DeepLearning Index
📚 16 Posts
🕒 Last Updated: Tue Mar 03 2026
This folder contains AI-DeepLearning-related posts.
| # | Blog Link | Date | Excerpt | Tags |
|---|---|---|---|---|
| 1 | AI-DeepLearning Index | Tue Mar 03 2026 | 📙 Index of AI-DeepLearning posts | |
| 2 | Deep Learning Path 🤖 | Fri Feb 27 2026 | A comprehensive learning path for deep learning, covering foundational concepts, optimization techniques, project structuring, convolutional neural networks, and sequence models. This guide provides a structured approach to mastering deep learning through the Deep Learning Specialization (DLS). | Data Science, Machine Learning, Deep Learning, Neural Networks, Artificial Intelligence, Computational Graphs |
| 3 | Neural Network Hypothesis and Intuition | Fri Feb 27 2026 | Explore the hypothesis and intuition behind neural networks, including their structure, activation functions, and how they process inputs to produce outputs. | Data Science, Machine Learning, Deep Learning, Neural Networks, Artificial Intelligence, Computational Graphs |
| 4 | Training a Neural Network | Fri Feb 27 2026 | In this post, we put together all the pieces we've learned about neural networks to understand how to train one effectively. We cover the cost function, backpropagation, gradient checking, and random initialization, along with key intuitions for each step. | Data Science, Machine Learning, Deep Learning, Neural Networks, Artificial Intelligence, Computational Graphs |
| 5 | Forward Propagation in Neural Networks | Fri Feb 27 2026 | Understand how forward propagation works in neural networks. Learn how inputs move through layers, how weights and biases transform data, and how activation functions generate predictions in deep learning models. | Data Science, Machine Learning, Deep Learning, Neural Networks, Artificial Intelligence, Forward Propagation, Computational Graphs |
| 6 | Vectorized Neural Networks Model Representation | Fri Feb 27 2026 | Learn how to represent neural networks in a vectorized form, transforming scalar equations into efficient matrix operations for scalable and optimized computations. | Data Science, Machine Learning, Deep Learning, Neural Networks, Artificial Intelligence, Computational Graphs, Vectorization, Matrix Operations |
| 7 | Examples and Intuitions I — Neural Networks as Logical Gates | Fri Feb 27 2026 | A simple example of applying neural networks is predicting logical operations like AND and OR. By choosing appropriate weights and a bias, a single logistic neuron can simulate these gates. This illustrates the power of neural networks to represent complex functions by stacking simple units. | Data Science, Machine Learning, Deep Learning, Neural Networks, Artificial Intelligence, Computational Graphs |
| 8 | Examples and Intuitions II — Building XNOR with a Hidden Layer | Fri Feb 27 2026 | In the previous section, we saw how to implement basic logical gates (AND, OR, NOR) using single neurons. However, some functions like XOR and XNOR cannot be represented by a single neuron. In this post, we see how adding a hidden layer allows us to model the XNOR function. | Data Science, Machine Learning, Deep Learning, Neural Networks, Artificial Intelligence, Computational Graphs |
| 9 | Multiclass Classification with Neural Networks | Fri Feb 27 2026 | Learn how to extend binary classification to multiclass classification using neural networks, where the output layer consists of multiple units representing different classes, and the final prediction is made by selecting the class with the highest output value. | Data Science, Machine Learning, Deep Learning, Neural Networks, Artificial Intelligence, Computational Graphs |
| 10 | Cost Function for Neural Networks | Fri Feb 27 2026 | The cost function for neural networks generalizes the logistic regression cost to multiple output units and includes regularization over all weights in the network. This post breaks down the cost function, explaining the double and triple summations, and provides intuition for how it works. | Data Science, Machine Learning, Deep Learning, Neural Networks, Artificial Intelligence, Computational Graphs |
| 11 | Backpropagation Algorithm | Fri Feb 27 2026 | Backpropagation is the algorithm used to minimize the neural network cost function. It computes the gradients of the cost function with respect to the parameters, allowing us to perform gradient descent and update our model. | Data Science, Machine Learning, Deep Learning, Neural Networks, Artificial Intelligence, Computational Graphs |
| 12 | Backpropagation Intuition | Fri Feb 27 2026 | Backpropagation computes the gradients of the cost function with respect to the parameters in a neural network. This post provides an intuitive understanding of how backpropagation works and why it is essential for training deep learning models. | Data Science, Machine Learning, Deep Learning, Neural Networks, Artificial Intelligence, Computational Graphs |
| 13 | Gradient Checking and Random Initialization | Fri Feb 27 2026 | Gradient checking is a technique to verify the correctness of your backpropagation implementation. Random initialization is crucial for breaking symmetry and allowing the network to learn effectively. | Data Science, Machine Learning, Deep Learning, Neural Networks, Artificial Intelligence, Computational Graphs |
| 14 | Dimensionality Reduction in Machine Learning | Fri Feb 27 2026 | Learn how dimensionality reduction simplifies high-dimensional data while preserving important patterns. Explore techniques like PCA and understand how reducing features improves model performance, visualization, and computational efficiency. | Data Science, Machine Learning, Deep Learning, Dimensionality Reduction, Feature Engineering, Principal Component Analysis, Artificial Intelligence |
| 15 | Principal Component Analysis (PCA) Explained | Fri Feb 27 2026 | Learn how Principal Component Analysis (PCA) reduces the dimensionality of datasets while preserving important information. Understand the intuition, mathematics, and practical uses of PCA in machine learning and data science. | Data Science, Machine Learning, Deep Learning, Principal Component Analysis, Dimensionality Reduction, Feature Engineering, Artificial Intelligence |
| 16 | Revision Cheat Sheet | Fri Feb 27 2026 | A concise cheat sheet covering core concepts, dimensions, activation functions, forward propagation, cost function, backpropagation, gradient checking, random initialization, training pipeline, and key intuition for neural networks. | Data Science, Machine Learning, Deep Learning, Neural Networks, Artificial Intelligence, Computational Graphs |
