Will's Notes

      • ๐ŸŒฒ Algorithms
      • ๐Ÿ›ฉ๏ธ Bellman-Ford
      • ๐Ÿ”Ž Binary Search
      • ๐ŸŒณ Binary Search Tree
      • ๐Ÿš‹ Breadth First Search
      • ๐Ÿšƒ Depth First Search
      • ๐Ÿš„ Dijkstra
      • ๐Ÿ’ง Edmonds-Karp
      • โฑ๏ธ Finish-Time DFS
      • ๐Ÿš“ Floyd-Warshall
      • ๐Ÿ“ Hashmap
      • ๐Ÿ—ป Heap
      • โ›ฐ๏ธ Kahn
      • ๐Ÿข Kosaraju
      • ๐ŸšŸ Kruskal
      • ๐Ÿšˆ Prim
      • ๐Ÿ”‘ Priority Queue
      • ๐Ÿ—ผ Union-Find
      • ๐Ÿ–ฅ๏ธ Computer Systems
        • ๐Ÿ›๏ธ Computer Architecture
        • ๐Ÿง  CPU
        • โฑ๏ธ Gate Delay
        • ๐Ÿ“‹ ISA
        • โšก๏ธ Performance
        • ๐Ÿ’ป SystemVerilog
        • โœ… Verification
          • โž• Adder
          • โž— Divider
          • ๐Ÿ›Ÿ Floating Point
          • โœ– Multiplier
          • โžก Shifter
          • ๐Ÿ’• Two's Complement
          • ๐Ÿ•Š๏ธ Branch Prediction
          • ๐ŸŽ๏ธ Bypassing
          • ๐Ÿ—“๏ธ Code Scheduling
          • ๐Ÿš— Datapath
          • ๐Ÿ’ต Hardware Cache
          • ๐ŸŽ Multicore
          • โš”๏ธ Predication
          • โญ๏ธ Superscalar
          • ๐ŸšŒ Bus
          • ๐Ÿ‘พ Digital Logic
          • ๐ŸŽฅ Memory
          • ๐Ÿ“ฆ Register
        • ๐Ÿ“ Concurrency
        • โ˜๏ธ Condition Variable
        • ๐Ÿ’€ Deadlock
        • ๐Ÿ”’ Lock
        • ๐Ÿ Semaphore
        • ๐Ÿšฅ Synchronization
        • โฐ Clock Synchronization
        • ๐ŸŽ—๏ธ Distributed Commit
        • ๐Ÿ“ Distributed Hash Table
        • ๐Ÿ“ฆ Distributed Storage System
        • โ˜๏ธ Distributed System
        • โค๏ธโ€๐Ÿฉน Fault Tolerance
        • ๐Ÿ—บ๏ธ MapReduce
        • ๐Ÿ‘จโ€๐Ÿ‘ฉโ€๐Ÿ‘งโ€๐Ÿ‘ฆ Multicast
        • ๐Ÿ๏ธ Paxos
        • ๐Ÿฅ Recovery
        • ๐Ÿ–จ๏ธ Replication
        • โœจ Spark
          • ๐Ÿ”‘ Bigtable
          • ๐Ÿฆด Coda File System
          • ๐Ÿ”Ž Google File System (GFS)
          • ๐ŸŒž Network File System (NFS)
        • ๐Ÿ“‡ Domain Name System
        • ๐Ÿ“› Naming
        • ๐Ÿ—ผ Networking
        • ๐Ÿ“ž Remote Procedure Call
        • ๐Ÿ’พ Server
        • ๐Ÿงฆ Socket
        • ๐Ÿ’ต Caches and Buffers
        • โŒจ๏ธ Devices
        • ๐Ÿฉ FAT
        • ๐Ÿ“ File Descriptor
        • ๐Ÿ—„๏ธ File System
        • โ„น๏ธ Inode
        • โฐ Interrupt
        • ๐Ÿฐ Memory Allocation
        • โฑ๏ธ Memory Hierarchy
        • โš™๏ธ Operating System
        • ๐Ÿ“„ Page Table
        • ๐Ÿช  Pipe
        • ๐Ÿ’ผ Process
        • โš”๏ธ RAID
        • ๐Ÿ—“๏ธ Scheduler
        • ๐Ÿš Shell
        • ๐Ÿ—ผ Signal
        • ๐Ÿงถ Thread
        • ๐Ÿง  Virtual Memory
      • ๐Ÿ”Ž Canny Edge Detection
      • ๐ŸŽจ Color
      • ๐Ÿ‘๏ธ Computer Vision
      • โ™ป๏ธ Convolution
      • ๐Ÿ”ฅ Feature Matching
      • ๐Ÿฟ Gaussian Kernels
      • ๐Ÿ‘๏ธโ€๐Ÿ—จ๏ธ Geometric Perception
      • ๐Ÿคฟ Gradient Blending
      • ๐Ÿž๏ธ Image
      • ๐Ÿ–Œ๏ธ Image Morphing
      • ๐Ÿ”บ Image Pyramid
      • โŒ›๏ธ SIFT
        • ๐Ÿ“ท Camera Models
        • ๐Ÿ—บ๏ธ Coordinate Systems
        • ๐Ÿ–ผ๏ธ Homography
        • ๐ŸŒž Horizon
        • ๐Ÿ” Intrinsics
        • ๐Ÿ“ฝ๏ธ Projective Geometry
        • โš™๏ธ Transformation
        • ♻️ Distance Transform
        • ๐Ÿ“ Single-View Metrology
        • โšฝ๏ธ Soccer Goal Problem
        • ๐Ÿ“ Two-View Metrology
        • ๐ŸงŠ Marching Cubes
        • ๐Ÿฐ Shape Representations
        • ๐Ÿ”– Coplanar PNP
        • ๐Ÿ“ Epipolar Geometry
        • ๐Ÿ”ฎ Multi-View Stereopsis
        • ๐Ÿ’ฆ Optical Flow
        • ๐Ÿž๏ธ Panorama Projection
        • ๐Ÿ“ Perspective-N-Point
        • โšฐ๏ธ Procrustes
        • ๐Ÿ‘Ÿ Structure From Motion
        • ๐Ÿฟ Two-View Stereopsis
        • ๐Ÿงญ Visual Odometry
        • ๐Ÿ’ง Hough Transform
        • ๐ŸŽฒ RANSAC
      • ๐Ÿง  Deep Learning
        • ๐Ÿšจ Attention
        • ๐Ÿค Backpropagation
        • โ„น๏ธ InfoNCE
        • ๐ŸŽฏ Mean Average Precision
        • ๐ŸŽ Non-Max Suppression
        • โœ‚๏ธ Normalization
        • ๐ŸŽŠ ConvNeXt
        • ๐Ÿ‘๏ธ Convolutional Neural Network
        • ๐Ÿ“ฆ DETR
        • โš™๏ธ EfficientNet
        • ๐Ÿ‘Ÿ Faster R-CNN
        • ๐Ÿ”ญ Fully Convolutional Network
        • ๐ŸŽ“ GradCAM
        • ๐Ÿ‘€ Group Equivariant CNN
        • ๐Ÿ‘บ Mask R-CNN
        • ๐Ÿ’บ Occupancy Network
        • ๐ŸŽพ PointNet
        • ๐Ÿ PointNet++
        • ๐Ÿฆฟ Vision Transformer
        • ๐Ÿ€ YOLO
        • ๐ŸŽฒ Bayesian Neural Network
        • ๐Ÿค– Boltzmann Machine
        • ๐Ÿ•‹ Deep Belief Network
        • ๐ŸŽฌ FiLM
        • ๐Ÿค Graph Neural Network
        • ๐Ÿ•ธ๏ธ Multilayer Perceptron
        • ๐ŸŽฑ Neural ODE
        • ๐Ÿชœ ResNet
        • ๐Ÿšซ Restricted Boltzmann Machine
        • ๐Ÿ•ฐ๏ธ Autoregressive Model
        • ๐Ÿช‘ Conditional 3D Generation
        • ๐Ÿ–Š๏ธ CVAE
        • โ™ป๏ธ CycleGAN
        • ๐Ÿ•ฏ๏ธ Diffusion Probabilistic Model
        • ๐Ÿ–ผ๏ธ Generative Adversarial Network
        • ๐Ÿงฌ Generative Cellular Automata
        • ๐Ÿง‘โ€๐Ÿซ Instruct-NeRF2NeRF
        • ๐ŸŽง NCSN
        • ๐ŸŒž NeRF
        • ๐Ÿ’ฆ Normalizing Flow
        • ๐Ÿ PixelCNN
        • ๐Ÿงจ Score SDE
        • โœจ StyleGAN
        • 🧊 TensoRF
        • ๐Ÿฅ‘ unCLIP
        • ๐Ÿ–‹๏ธ Variational Autoencoder
        • ๐Ÿ”ช VQ-VAE
        • ๐Ÿงธ BERT
        • โ›ฉ๏ธ Gated Recurrent Unit
        • ๐ŸŽค Large Language Models
        • ๐ŸŽฅ Long Short-Term Memory
        • ๐Ÿ’ฌ Recurrent Neural Network
        • ๐Ÿงต Seq2Seq
        • โŒ›๏ธ Temporal Convolutional Networks
        • ๐Ÿฆพ Transformer
        • ๐Ÿงฌ Autoencoder
        • ๐Ÿฅพ BYOL
        • ๐ŸŒ CLIP
        • ๐Ÿฆ– DINO
        • ๐Ÿผ MoCo
        • ๐Ÿ‘ฅ Siamese Network
        • ๐ŸŽญ SimCLR
      • ๐Ÿค– Machine Learning
        • ๐Ÿ”ฅ Adaboost
        • ๐ŸŽป Ensemble
        • ๐ŸŽ Gradient Tree Boosting
        • ๐ŸŒฒ Random Forest
        • โš–๏ธ Algorithmic Fairness
        • ๐Ÿ’ฐ Bias Bounties
        • ๐Ÿ”ฉ Bolt-On Bias Mitigation
        • ๐Ÿงญ COMPAS
        • ๐Ÿชถ Multi-Fairness Unsatisfiability Theorem
        • ๐Ÿ”ฎ Oracle Fairness Approach
        • ๐Ÿ• Belief Propagation
        • ๐Ÿงฌ Evidence Lower Bound
        • ๐ŸŒฒ Junction Tree
        • ๐Ÿฟ Kernel Density Estimation
        • ๐Ÿ˜ก Mean Field Approximation
        • ๐Ÿ”ซ Variable Elimination
        • ๐Ÿ“ฆ 3D Metrics
        • ๐ŸŽน Classification Metrics
        • ๐Ÿฅ Generative Metrics
        • ๐ŸŒพ Conditional Random Fields
        • ๐Ÿ’ญ Decision Tree
        • โšก๏ธ Energy-Based Model
        • ๐Ÿ“ผ Gaussian Mixture Model
        • ๐Ÿฅข Generalized Linear Model
        • ๐Ÿ Independent Component Analysis
        • ๐ŸŽ’ K-Means Clustering
        • ๐Ÿ  K-Nearest Neighbors
        • ๐Ÿฏ Kernel Regression
        • ๐Ÿ“„ Latent Dirichlet Allocation
        • ๐Ÿ—ผ Least Mean Squares
        • ๐Ÿญ Linear Factor Model
        • ๐Ÿฆ Linear Regression
        • ๐Ÿฆ  Logistic Regression
        • ๐Ÿ‘ถ Naive Bayes
        • ๐Ÿ‘“ Perceptron
        • 🗜️ Principal Component Analysis
        • 🔨 Principal Component Regression
        • ๐Ÿš’ Response Surface Methods
        • ๐ŸŽณ Score-Based Models
        • ๐Ÿ›ฉ๏ธ Support Vector Machine
        • ๐Ÿ‘€ AutoML
        • ๐ŸŽ™๏ธ Explainability
        • โ“ Imputation
        • โœ… Validation
        • ๐Ÿฅƒ Annealed Importance Sampling
        • ๐Ÿ–– Contrastive Divergence
        • ๐ŸŽ‰ Expectation Maximization
        • โ›ฐ๏ธ Gradient Descent
        • ๐Ÿ”Ž Greedy Search
        • ๐ŸŽฒ Maximum A Posteriori
        • ๐Ÿช™ Maximum Likelihood Estimate
        • ๐ŸŒฑ Natural Gradient
        • ๐Ÿ“ฃ Noise Contrastive Estimation
        • ๐Ÿ‘” Overfitting
        • ๐Ÿ™ƒ Pseudo-Likelihood
        • โšฝ๏ธ Regularization Penalties
        • ๐ŸŽผ Score Matching
        • ๐Ÿšจ Bayesian Network
        • โฐ Dynamic Bayesian Network
        • ๐Ÿช Factor
        • โ˜‚๏ธ Hidden Markov Model
        • ๐Ÿ—ณ๏ธ Markov Random Field
        • ๐Ÿชฉ Probabilistic Graphical Model
        • โœ‹ Active Learning
        • ๐ŸŽจ Generative Modeling
        • ๐Ÿชฉ Representation Learning
        • ๐ŸŽ“ Supervised Learning
        • ๐Ÿ” Unsupervised Learning
        • โ˜ ๏ธ Curse of Dimensionality
        • ๐Ÿฟ Kernel
        • ๐Ÿฆ„ Log Derivative Trick
        • ๐Ÿช Manifold Hypothesis
        • ๐Ÿฅช No Free Lunch Theorem
        • ๐Ÿช„ Reparameterization Trick
      • ๐Ÿงฎ Mathematics
        • ๐ŸŒŽ Quaternion
        • ๐Ÿง Derivative
        • โ„๏ธ Gradient
        • ๐ŸŽค Taylor Series
        • ๐Ÿชž Equivariance
        • 🗿 Invariance
        • ๐Ÿ“ Angle
        • ๐Ÿš— Distance
        • ๐ŸŽณ Inner Product
        • ๐Ÿ“Œ Norm
        • ๐Ÿ“ฝ๏ธ Projection
        • ๐Ÿชฉ Rotation
        • ๐Ÿ’ง Cross Entropy
        • ๐Ÿ”ฅ Entropy
        • ๐Ÿชญ F-Divergence
        • ๐Ÿ’ฐ Information Gain
        • โœ‚๏ธ KL Divergence
        • ๐Ÿค Mutual Information
        • ๐Ÿฅง Cholesky Decomposition
        • ๐Ÿ“– Determinant
        • ๐Ÿชท Eigendecomposition
        • ๐Ÿ’ Eigenvalue
        • ๐Ÿ—บ๏ธ Linear Mapping
        • ๐Ÿฑ Matrix
        • ๐Ÿ“Ž Singular Value Decomposition
        • โš™๏ธ System of Linear Equations
        • ๐Ÿ–Š๏ธ Trace
        • ๐Ÿน Vector
        • ๐Ÿ‘  Constrained Optimization
        • ๐Ÿ•น๏ธ Optimal Control
        • ๐Ÿ‘Ÿ Unconstrained Optimization
        • ๐Ÿช™ Bayes' Theorem
        • ๐Ÿ‘จโ€๐Ÿ‘ฉโ€๐Ÿ‘งโ€๐Ÿ‘ฆ Exponential Family
        • ๐Ÿ‘‘ Gaussian
        • ๐Ÿ‡บ๐Ÿ‡ธ Independence
        • ๐ŸŒˆ Jensen's Inequality
        • ๐ŸŽฒ Probability Distribution
        • ๐Ÿช Random Variable
        • ๐Ÿ“š Summary Statistics
        • ๐Ÿ‘Ÿ Total Variation Distance
      • ๐ŸŽฒ Differential Privacy
      • โšก๏ธ Exponential Mechanism
      • ๐Ÿ“Œ Laplace Mechanism
      • ๐Ÿ‘บ Privacy
      • ๐Ÿ“„ Randomized Response
      • โŒ›๏ธ Inverse Reinforcement Learning
      • ๐Ÿ—ผ Offline Reinforcement Learning
      • โ™Ÿ๏ธ Reinforcement Learning
      • ๐Ÿคทโ€โ™‚๏ธ RLHF
        • ๐Ÿ˜ Conservative Q-Learning
        • ๐Ÿ‘พ Deep Q-Learning
        • โœŒ๏ธ Double Q-Learning
        • ๐ŸŽซ Eligibility Traces
        • ๐Ÿ“บ Experience Replay
        • ๐ŸงŠ Least-Squares TD
        • ๐Ÿช™ Monte Carlo Control
        • ๐Ÿชœ N-Step Bootstrapping
        • ๐Ÿš€ Q-Learning
        • ๐Ÿงญ Sarsa
        • โŒ›๏ธ Temporal Difference Learning
        • ๐Ÿงธ Agent
        • ๐Ÿ“– Contextual Bandit
        • โ›“๏ธ Markov Chain
        • ๐ŸŒŽ Markov Decision Process
        • ๐ŸŽฐ Multi-Armed Bandit
        • ๐Ÿช POMDP
        • ๐Ÿ‘‘ Successor Representation
        • ๐ŸŽฒ Entropy Regularization
        • ๐Ÿ’ฐ Epsilon-Greedy
        • ๐Ÿš€ Exploration
        • ๐Ÿ’ฌ Information Gain Exploration
        • ๐Ÿคฉ Optimistic Exploration
        • โ“ Thompson Sampling
        • ๐Ÿงฉ ACT
        • ๐Ÿต Behavioral Cloning
        • ๐Ÿ—ก๏ธ DAgger
        • ๐Ÿฆพ RT-1
        • ๐Ÿงฒ VIP
        • ๐Ÿƒ Feature Matching
        • ๐Ÿฆฎ Guided Cost Learning
        • ๐ŸŽฒ MaxEnt
        • ๐ŸŽฒ Cross Entropy Method
        • ๐Ÿ’ฃ Dyna
        • ๐Ÿงจ Dynamic Programming
        • ๐ŸŒฒ Heuristic Search
        • ๐Ÿ”ฎ Model Predictive Control
        • ๐Ÿ—บ๏ธ Monte Carlo Tree Search
        • ๐Ÿ’ฏ Policy Evaluation
        • โ™ป๏ธ Policy Iteration
        • โฐ Real-Time Dynamic Programming
        • ๐ŸŽณ Rollout
        • ๐Ÿ’Ž Value Iteration
        • ๐Ÿฉฐ A3C
        • ๐ŸŽญ Actor-Critic
        • ๐ŸŽฏ Advantage-Weighted Regression
        • ๐Ÿงจ DDPG
        • โš”๏ธ Deterministic Policy Gradient
        • ๐Ÿ› ๏ธ Monte Carlo Policy Gradient
        • ๐Ÿšœ Natural Policy Gradient
        • ๐ŸŽฉ Off-Policy Actor-Critic
        • ๐Ÿš‘ Off-Policy Policy Gradient
        • ๐Ÿš“ Policy Gradient
        • ๐Ÿ“ช Proximal Policy Optimization
        • ๐Ÿชถ Soft Actor-Critic
        • โœŒ๏ธ TD3
        • ๐Ÿฆ Trust Region Policy Optimization
        • ๐Ÿ”” Bellman Equation
        • ๐ŸŽ›๏ธ Control As Inference
        • ๐Ÿ’€ Deadly Triad
        • โœ๏ธ Linear Function Approximation
        • โ„๏ธ Semi-Gradient
      • ๐Ÿฆพ Robotics
      • ๐Ÿ‘“ Vision-Robot Bridge
      • ๐Ÿ“ˆ Statistics
        • โ›ณ๏ธ Bayesian Optimization
        • ๐Ÿฅ‚ Conjugate
        • ๐ŸŽฒ Gaussian Process
        • 💁‍♂️ Jeffreys Prior
        • ๐Ÿช™ Binomial Model
        • ๐Ÿชœ Hierarchical Model
        • ๐Ÿ’ฟ Mixture Model
        • ๐Ÿ›Ž๏ธ Normal Model
        • ๐Ÿ›ฉ๏ธ Poisson Model
        • ๐Ÿšจ Regression Model
        • ๐Ÿ•น๏ธ Control Variate
        • โ™ป๏ธ Gibbs Sampling
        • ๐Ÿงฑ Grid Sampling
        • ๐Ÿช† Importance Sampling
        • โ˜„๏ธ Langevin Dynamics
        • ๐ŸŽฏ Markov Chain Monte Carlo
        • ๐ŸšŠ Metropolis-Hastings
        • ๐Ÿค” Monte Carlo Sampling

Folder: Computer-Vision

12 items under this folder.

  • ⌛️ SIFT (Nov 12, 2024)
  • ♻️ Convolution (Nov 12, 2024)
  • 🍿 Gaussian Kernels (Nov 12, 2024)
  • 🎨 Color (Nov 12, 2024)
  • 🏞️ Image (Nov 12, 2024)
  • 👁️ Computer Vision (Nov 12, 2024)
  • 👁️‍🗨️ Geometric Perception (Nov 12, 2024)
  • 🔎 Canny Edge Detection (Nov 12, 2024)
  • 🔥 Feature Matching (Nov 12, 2024)
  • 🔺 Image Pyramid (Nov 12, 2024)
  • 🖌️ Image Morphing (Nov 12, 2024)
  • 🤿 Gradient Blending (Nov 12, 2024)

